Tag Archives: Challenge

Implement continuous integration and delivery of serverless AWS Glue ETL applications using AWS Developer Tools

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-serverless-aws-glue-etl-applications-using-aws-developer-tools/

AWS Glue is an increasingly popular way to develop serverless ETL (extract, transform, and load) applications for big data and data lake workloads. Organizations that transform their ETL applications to cloud-based, serverless ETL architectures need a seamless, end-to-end continuous integration and continuous delivery (CI/CD) pipeline: from source code, to build, to deployment, to product delivery. Having a good CI/CD pipeline can help your organization discover bugs before they reach production and deliver updates more frequently. It can also help developers write quality code and automate the ETL job release management process, mitigate risk, and more.

AWS Glue is a fully managed data catalog and ETL service. It simplifies and automates the difficult and time-consuming tasks of data discovery, conversion, and job scheduling. AWS Glue crawls your data sources and constructs a data catalog using pre-built classifiers for popular data formats and data types, including CSV, Apache Parquet, JSON, and more.

When you are developing ETL applications using AWS Glue, you might come across some of the following CI/CD challenges:

  • Iterative development with unit tests
  • Continuous integration and build
  • Pushing the ETL pipeline to a test environment
  • Pushing the ETL pipeline to a production environment
  • Testing ETL applications using real data (live test)
  • Exploring and validating data

In this post, I walk you through a solution that implements a CI/CD pipeline for serverless AWS Glue ETL applications supported by AWS Developer Tools (including AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild) and AWS CloudFormation.

Solution overview

The following diagram shows the pipeline workflow:

This solution uses AWS CodePipeline, which lets you orchestrate and automate the test and deploy stages for ETL application source code. The solution consists of a pipeline that contains the following stages:

1.) Source Control: In this stage, the AWS Glue ETL job source code and the AWS CloudFormation template file for deploying the ETL jobs are both committed to version control. I chose to use AWS CodeCommit for version control.

To get the ETL job source code and AWS CloudFormation template, download the gluedemoetl.zip file. This solution is developed based on a previous post, Build a Data Lake Foundation with AWS Glue and Amazon S3.

2.) LiveTest: In this stage, all resources required for the solution—including AWS Glue crawlers, jobs, S3 buckets, and roles—are provisioned, deployed, live tested, and cleaned up.

The LiveTest stage includes the following actions:

  • Deploy: In this action, all the resources that are required for this solution (crawlers, jobs, buckets, roles, and so on) are provisioned and deployed using an AWS CloudFormation template.
  • AutomatedLiveTest: In this action, all the AWS Glue crawlers and jobs are run, and data exploration and validation tests are performed. These validation tests include, but are not limited to, comparing record counts between the raw and transformed tables in the data lake, plus any other business validations. I used AWS CodeBuild for this action; a minimal sketch of such a validation check appears after this list.
  • LiveTestApproval: This action is included for the cases in which a pipeline administrator approval is required to deploy/promote the ETL applications to the next stage. The pipeline pauses in this action until an administrator manually approves the release.
  • LiveTestCleanup: In this action, all the LiveTest stage resources, including test crawlers, jobs, roles, and so on, are deleted using the AWS CloudFormation template. This action helps minimize cost by ensuring that the test resources exist only for the duration of the AutomatedLiveTest and LiveTestApproval actions.
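
For illustration only, the following Python (boto3) sketch shows the kind of check an AutomatedLiveTest step could run. It is not the CodeBuild script shipped in gluedemoetl.zip; the Glue job name and the Athena results bucket are placeholder assumptions, while the database and table names match the Athena queries shown later in this post.

import time
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

def run_glue_job(job_name):
    """Start a Glue job run and block until it reaches a terminal state."""
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)

def athena_count(database, table, output_s3):
    """Run SELECT count(*) against a Data Catalog table and return the count."""
    qid = athena.start_query_execution(
        QueryString=f'SELECT count(*) FROM "{database}"."{table}"',
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if status not in ("QUEUED", "RUNNING"):
            break
        time.sleep(2)
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return int(rows[1]["Data"][0]["VarCharValue"])  # row 0 is the header row

# Placeholder job name and results bucket; table names follow the demo solution.
assert run_glue_job("gluedemo-etl-job") == "SUCCEEDED"
raw = athena_count("nycitytaxi_gluedemocicdtest", "data", "s3://my-athena-results/")
lake = athena_count("nytaxiparquet_gluedemocicdtest", "datalake", "s3://my-athena-results/")
assert raw == lake, f"record counts differ: raw={raw}, datalake={lake}"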

3.) DeployToProduction: In this stage, all the resources are deployed using the AWS CloudFormation template to the production environment.

Try it out

This pipeline takes approximately 20 minutes to complete the LiveTest stage, up to the LiveTestApproval action, where manual approval is required.

To get started with this solution, choose Launch Stack:

This creates the CI/CD pipeline with all of its stages, as described earlier. It performs an initial commit of the sample AWS Glue ETL job source code to trigger the first release change.

In the AWS CloudFormation console, choose Create. After the template finishes creating resources, you see the pipeline name on the stack Outputs tab.

After that, open the CodePipeline console and select the newly created pipeline. Initially, your pipeline’s CodeCommit stage shows that the source action failed.

Allow a few minutes for your new pipeline to detect the initial commit applied by the CloudFormation stack creation. As soon as the commit is detected, your pipeline starts. You will see the successful stage completion status as soon as the CodeCommit source stage runs.

In the CodeCommit console, choose Code in the navigation pane to view the solution files.

Next, you can watch the pipeline move through the deploy and AutomatedLiveTest actions of the LiveTest stage, until it finally reaches the LiveTestApproval action.

At this point, if you check the AWS CloudFormation console, you can see that a new template has been deployed as part of the LiveTest deploy action.

At this point, make sure that the AWS Glue crawlers and the AWS Glue job ran successfully. Also check whether the corresponding databases and external tables have been created in the AWS Glue Data Catalog. Then verify that the data is validated using Amazon Athena, as shown following.

Open the AWS Glue console, and choose Databases in the navigation pane. You will see the following databases in the Data Catalog:

Open the Amazon Athena console, and run the following queries. Verify that the record counts match.

SELECT count(*) FROM "nycitytaxi_gluedemocicdtest"."data";
SELECT count(*) FROM "nytaxiparquet_gluedemocicdtest"."datalake";

The following shows the raw data:

The following shows the transformed data:

The pipeline pauses the action until the release is approved. After validating the data, manually approve the revision on the LiveTestApproval action on the CodePipeline console.

Add comments as needed, and choose Approve.

The LiveTestApproval stage now appears as Approved on the console.

After the revision is approved, the pipeline proceeds to use the AWS CloudFormation template to destroy the resources that were deployed in the LiveTest deploy action. This helps reduce cost and ensures a clean test environment on every deployment.

Production deployment is the final stage. In this stage, all the resources—AWS Glue crawlers, AWS Glue jobs, Amazon S3 buckets, roles, and so on—are provisioned and deployed to the production environment using the AWS CloudFormation template.

After successfully running the whole pipeline, feel free to experiment with it by changing the source code stored in AWS CodeCommit. For example, if you modify the AWS Glue ETL job to generate an error, the AutomatedLiveTest action should fail. Or if you change the AWS CloudFormation template so that its creation fails, the LiveTest deploy action should be affected. The objective of the pipeline is to ensure that all changes deployed to production work as expected.

Conclusion

In this post, you learned how easy it is to implement CI/CD at scale for serverless AWS Glue ETL solutions with AWS Developer Tools like AWS CodePipeline and AWS CodeBuild. Implementing such solutions can help you accelerate ETL development and testing at your organization.

If you have questions or suggestions, please comment below.

Additional Reading

If you found this post useful, be sure to check out Implement Continuous Integration and Delivery of Apache Spark Applications using AWS and Build a Data Lake Foundation with AWS Glue and Amazon S3.

About the Authors

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.

Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, is that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-only media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because a few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The Constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Announcing Coolest Projects North America

Post Syndicated from Courtney Lentz original https://www.raspberrypi.org/blog/coolest-projects-north-america/

The Raspberry Pi Foundation loves to celebrate people who use technology to solve problems and express themselves creatively, so we’re proud to expand the incredibly successful event Coolest Projects to North America. This free event will be held on Sunday 23 September 2018 at the Discovery Cube Orange County in Santa Ana, California.


What is Coolest Projects?

Coolest Projects is a world-leading showcase that empowers and inspires the next generation of digital creators, innovators, changemakers, and entrepreneurs. The event is both a competition and an exhibition to give young digital makers aged 7 to 17 a platform to celebrate their successes, creativity, and ingenuity.


In 2012, Coolest Projects was conceived as an opportunity for CoderDojo Ninjas to showcase their work and for supporters to acknowledge these achievements. Week after week, Ninjas would meet up to work diligently on their projects, hacks, and code; however, it can be difficult for them to see their long-term progress on a project when they’re concentrating on its details on a weekly basis. Coolest Projects became a dedicated time each year for Ninjas and supporters to reflect, celebrate, and share both the achievements and challenges of the maker’s journey.


Coolest Projects North America

Not only is Coolest Projects expanding to North America, it’s also expanding its participant pool! Members of our team have met so many amazing young people creating in all areas of the world that it simply made sense to widen our outreach to include Code Clubs, students of Raspberry Pi Certified Educators, and members of the Raspberry Jam community at large, as well as CoderDojo attendees.


Exhibit and attend Coolest Projects

Coolest Projects is a free, family- and educator-friendly event. Young people can apply to exhibit their projects, and the general public can register to attend this one-day event. Be sure to register today, because you make Coolest Projects what it is: the coolest.

The post Announcing Coolest Projects North America appeared first on Raspberry Pi.

Cybersecurity Insurance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/cybersecurity_i_1.html

Good article about how difficult it is to insure an organization against Internet attacks, and how expensive the insurance is.

Companies like retailers, banks, and healthcare providers began seeking out cyberinsurance in the early 2000s, when states first passed data breach notification laws. But even with 20 years’ worth of experience and claims data in cyberinsurance, underwriters still struggle with how to model and quantify a unique type of risk.

“Typically in insurance we use the past as prediction for the future, and in cyber that’s very difficult to do because no two incidents are alike,” said Lori Bailey, global head of cyberrisk for the Zurich Insurance Group. Twenty years ago, policies dealt primarily with data breaches and third-party liability coverage, like the costs associated with breach class-action lawsuits or settlements. But more recent policies tend to accommodate first-party liability coverage, including costs like online extortion payments, renting temporary facilities during an attack, and lost business due to systems failures, cloud or web hosting provider outages, or even IT configuration errors.

In my new book — out in September — I write:

There are challenges to creating these new insurance products. There are two basic models for insurance. There’s the fire model, where individual houses catch on fire at a fairly steady rate, and the insurance industry can calculate premiums based on that rate. And there’s the flood model, where an infrequent large-scale event affects large numbers of people — but again at a fairly steady rate. Internet+ insurance is complicated because it follows neither of those models but instead has aspects of both: individuals are hacked at a steady (albeit increasing) rate, while class breaks and massive data breaches affect lots of people at once. Also, the constantly changing technology landscape makes it difficult to gather and analyze the historical data necessary to calculate premiums.

BoingBoing article.

Publisher Gets Carte Blanche to Seize New Sci-Hub Domains

Post Syndicated from Ernesto original https://torrentfreak.com/publisher-gets-carte-blanche-to-seize-new-sci-hub-domains-180410/

While Sci-Hub is loved by thousands of researchers and academics around the world, copyright holders are doing everything in their power to wipe it off the web.

Following a $15 million defeat against Elsevier last June, the American Chemical Society (ACS) won a default judgment of $4.8 million in copyright damages a few months later.

The publisher was further granted a broad injunction, requiring various third-party services to stop providing access to the site. This includes domain registries, hosting companies and search engines.

Soon after the order was signed, several of Sci-Hub’s domain names became unreachable as domain registries and Cloudflare complied with the court order. Still, Sci-Hub remained available all this time, with help from several newly registered domain names.

Frustrated by Sci-Hub’s resilience, ACS recently went back to court asking for an amended injunction. The publisher requested the authority to seize any and all Sci-Hub domain names, also those that will be registered in the future.

“Plaintiff has been forced to engage in a game of ‘whac-a-mole’ whereby new ‘sci-hub’ domain names emerge,” ACS informed the court.

“Further complicating matters, some registries, registrars, and Internet service providers have refused to disable newer Sci-Hub domain names that were not specifically identified in the Complaint or the injunction.”

Soon after the request was submitted, US District Court Judge Leonie Brinkema agreed to the amended language.

The amended injunction now requires search engines, hosting companies, domain registrars, and other service or software providers, to cease facilitating access to Sci-Hub. This includes, but is not limited to, the following domain names.

‘sci-hub.ac, scihub.biz, sci-hub.bz, sci-hub.cc, sci-hub.cf, sci-hub.cn, sci-hub.ga, sci-hub.gq, scihub.hk, sci-hub.is, sci-hub.la, sci-hub.name, sci-hub.nu, sci-hub.nz, sci-hub.onion, scihub22266oqcxt.onion, sci-hub.tw, and sci-hub.ws.’

From the injunction

The new injunction makes ACS’ enforcement efforts much more effective. It effectively means that third-party services can no longer refuse to comply because a Sci-Hub domain is not listed in the complaint or injunction.

This already appears to have had some effect, as several domain names including sci-hub.la and sci-hub.tv became inaccessible soon after the paperwork was signed. Still, it is unlikely that it will help to shut down the site completely.

Several service providers are not receptive to US court orders. One example is Iceland’s domain registry ISNIC, and indeed, at the time of writing, Sci-Hub.is is still widely available.

Seizing .onion domain names, which are used on the Tor network, may also prove to be a challenge. After all, there is no central registration organization involved.

For now, Sci-Hub founder and operator Alexandra Elbakyan appears determined to keep the site online, whatever it takes. While it may be a hassle for users to find the latest working domain names, the new court order is not the end of the “whac-a-mole” just yet.

A copy of the amended injunction is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

American Public Television Embraces the Cloud — And the Future

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/american-public-television-embraces-the-cloud-and-the-future/


American Public Television was like many organizations that have been around for a while. They were entrenched using an older technology — in their case, tape storage and distribution — that once met their needs but was limiting their productivity and preventing them from effectively collaborating with their many media partners. APT’s VP of Technology knew that he needed to move into the future and embrace cloud storage to keep APT ahead of the game.

Since 1961, American Public Television (APT) has been a leading distributor of groundbreaking, high-quality, top-rated programming to the nation’s public television stations. Gerry Field is the Vice President of Technology at APT and is responsible for delivering their extensive program catalog to 350+ public television stations nationwide.

In the time since Gerry joined APT in 2007, the industry has been in digital overdrive. During that time, APT has continued to acquire and distribute the best in public television programming to their technically diverse subscribers.

This created two challenges for Gerry. First, new technology and format proliferation were driving dramatic increases in digital storage. Second, many of APT’s subscribers struggled to keep up with the rapidly changing industry. While some subscribers had state-of-the-art satellite systems to receive programming, others had to wait for the post office to drop off programs recorded on tape weeks earlier. With no slowdown on the horizon of innovation in the industry, Gerry knew that his storage and distribution systems would reach a crossroads in no time at all.


Living the tape paradigm

The digital media industry is only a few years removed from its film, and later videotape, roots. Tape was the input and the output of the industry for many years. As a consequence, the tools and workflows used by the industry were built and designed to work with tape. Over time, the “file” slowly replaced the tape as the object to be captured, edited, stored, and distributed. Trouble was, many of the systems and, more importantly, the workflows were based on processing tape, and these have proven hard to change.

At APT, Gerry realized the limits of the tape paradigm and began looking for technologies and solutions that enabled workflows based on file and object based storage and distribution.

Thinking file based storage and distribution

For data (digital media) storage, APT, like everyone else, started by installing onsite storage servers. As the amount of digital data grew, more storage was added. In addition, APT was expanding its distribution footprint by creating or partnering with distribution channels such as CreateTV and APT Worldwide. This dramatically increased the number of programming formats and the amount of data that had to be stored. As a consequence, updating, maintaining, and managing the APT storage systems was becoming a major challenge and a major resource hog.


Knowing that his in-house storage system was only going to cost more time and money, Gerry decided it was time to look at cloud storage. But that wasn’t the only reason he looked at the cloud. While most people consider cloud storage as just a place to back up and archive files, Gerry was envisioning how the ubiquity of the cloud could help solve his distribution challenges. The trouble was that the price of cloud storage from providers like Amazon S3 and Microsoft Azure was a non-starter, especially for a non-profit. Then Gerry came across Backblaze. The B2 Cloud Storage service met all of his performance requirements, and at $0.005/GB/month for storage and $0.01/GB for downloads, it was nearly 75% less than S3 or Azure.

Gerry did the math and found that he could economically incorporate B2 Cloud Storage into his IT portfolio, using it for both program submission and for active storage and archiving of the APT programs. In addition, B2 now gives him the foundation necessary to receive and distribute programming content over the Internet. This is especially useful for organizations that can’t conveniently access satellite distribution systems. Not to mention downloading from the cloud is much faster than sending a tape through the mail.

Adding B2 Cloud Storage to their infrastructure has helped American Public Television address two key challenges. First, they now have “unlimited” storage in the cloud without having to add any hardware. In addition, with B2, they only pay for the storage they use. That means they don’t have to buy storage upfront trying to match the maximum amount of storage they’ll ever need. Second, by using B2 as a distribution source for their programming, APT subscribers, especially the smaller and remote ones, can get content faster and more reliably without having to perform costly upgrades to their infrastructure.

The road ahead

As APT gets used to its file-based infrastructure and workflow, there are a number of cost-saving and income-generating ideas the team is now considering. Here are a few:

Program Submissions — New content can be uploaded from anywhere using a web browser, an Internet connection, and a login. For example, a producer in Cambodia can upload their film to B2. From there the film is downloaded to an in-house system where it is processed and transcoded using compute. The finished film is added to the APT catalog and added to B2. Once there, the program is instantly available for subscribers to order and download.
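
As a rough illustration of that submission flow, here is a minimal Python sketch using the b2sdk library. The bucket name, file paths, and credentials are placeholder assumptions, not APT’s actual setup:

from b2sdk.v2 import InMemoryAccountInfo, B2Api

# Authorize against B2 (the key ID and application key are placeholders)
b2_api = B2Api(InMemoryAccountInfo())
b2_api.authorize_account("production", "YOUR_KEY_ID", "YOUR_APPLICATION_KEY")

# A producer uploads a finished master to a hypothetical submissions bucket...
bucket = b2_api.get_bucket_by_name("apt-program-submissions")
bucket.upload_local_file(
    local_file="films/documentary_master.mov",
    file_name="submissions/documentary_master.mov",
)

# ...and the in-house system later pulls it down for processing and transcoding
downloaded = bucket.download_file_by_name("submissions/documentary_master.mov")
downloaded.save_to("/ingest/documentary_master.mov")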

“The affordability and performance of Backblaze B2 is what allowed us to make the B2 cloud part of the APT data storage and distribution strategy into the future.” — Gerry Field

Easier Previews — At any time, work in process or finished programs can be made available for download from the B2 cloud. One place this could be useful is where a subscriber needs to review a program to comply with local policies and practices before airing. In the old system, each “one-off” was a time-consuming manual process.

Instant Subscriptions — There are many organizations such as schools and businesses that want to use just one episode of a desired show. With an e-commerce based website, current or even archived programming kept in B2 could be available to download or stream for a minimal charge.

At APT there were multiple technologies needed to make their file-based infrastructure work, but as Gerry notes, having an affordable, trustworthy cloud storage service like B2 is one of the critical building blocks needed to make everything work together.

The post American Public Television Embraces the Cloud — And the Future appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Fox Networks Obtains Piracy Blocking Injunction Against Rojadirecta

Post Syndicated from Andy original https://torrentfreak.com/fox-networks-obtains-piracy-blocking-injunction-against-rojadirecta-180405/

Twelve years ago this October, a court in Denmark ordered a local ISP to begin blocking unlicensed Russian music site AllofMP3. It was a landmark moment that opened the floodgates.

Although most countries took a few years to follow, blocking is now commonplace across Europe and if industry lobbyists have their way, it will soon head to North America. Meanwhile, other regions are getting their efforts underway, with Uruguay the latest country to reserve a place on the list.

The news comes via Fox Sports Latin America, which expressed satisfaction this week that a court in the country had handed down an interim injunction against local ISPs which compels them to block access to streaming portal Rojadirecta.

Despite a focus on Spanish-speaking regions, Rojadirecta is one of the best-known and longest-standing unauthorized sports streaming portals in the world. Offering links to live streams of most spectator sports, Rojadirecta has gained a loyal and international following.

This has resulted in a number of lawsuits and legal challenges in multiple regions, the latest being a criminal copyright infringement complaint by Fox Sports Latin America. As usual, the company is annoyed that its content is being made available online without the proper authorization.

“This exemplary ruling marks the beginning of judicial awareness on online piracy issues,” said Daniel Steinmetz, Chief Anti-Piracy Officer of Fox Networks Group Latin America.

“FNG Latin America works constantly to combat the illegal use of content on different fronts and with great satisfaction we have found in Uruguay an important ally in the fight against this scourge. We are on our way to ending the impunity of these illegal content relay sites.”

Fox Sports says that with this pioneering action, Uruguay is now at the forefront of the campaign to tackle piracy currently running rampant across South America.

According to a NetNames report, there are 222 million Internet users in the region, of which 110 million access pirated content. This translates to 1,377 million TV hours per year but it’s hoped that additional action in other countries will help to stem the rising tide.

“We have already presented actions in other countries in the region where we will seek to replicate what we have obtained in Uruguay,” Fox said in a statement.

Local reports indicate that Internet providers have not yet taken action to block RojaDirecta but it’s expected they will do so in the near future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Backblaze Announces B2 Compute Partnerships

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/introducing-cloud-compute-services/


In 2015, we announced Backblaze B2 Cloud Storage — the most affordable, high performance storage cloud on the planet. The decision to release B2 as a service was in direct response to customers asking us if they could use the same cloud storage infrastructure we use for our Computer Backup service. With B2, we entered a market in direct competition with Amazon S3, Google Cloud Services, and Microsoft Azure Storage. Today, we have over 500 petabytes of data from customers in over 150 countries. At $0.005 / GB / month for storage (1/4th of S3) and $0.01 / GB for downloads (1/5th of S3), it turns out there’s a healthy market for cloud storage that’s easy and affordable.
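
To put those prices in perspective, here is a quick back-of-the-envelope estimate in Python, using only the figures quoted above. The 50 TB stored / 10 TB downloaded workload is an arbitrary example, and real bills also include request fees and tiering not modeled here:

# Prices quoted in this post: B2 at $0.005/GB/month storage and $0.01/GB download,
# with S3 described as roughly 4x for storage and 5x for download.
B2_STORAGE, B2_DOWNLOAD = 0.005, 0.01
S3_STORAGE, S3_DOWNLOAD = B2_STORAGE * 4, B2_DOWNLOAD * 5

def monthly_cost(stored_gb, downloaded_gb, storage_rate, download_rate):
    """Storage plus egress for one month; ignores request fees and tiers."""
    return stored_gb * storage_rate + downloaded_gb * download_rate

stored_gb, downloaded_gb = 50_000, 10_000  # example workload
b2 = monthly_cost(stored_gb, downloaded_gb, B2_STORAGE, B2_DOWNLOAD)
s3 = monthly_cost(stored_gb, downloaded_gb, S3_STORAGE, S3_DOWNLOAD)
print(f"B2: ${b2:,.2f}/month  S3 (approx): ${s3:,.2f}/month  savings: {1 - b2 / s3:.0%}")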

As B2 has grown, customers wanted to use our cloud storage for a variety of use cases that required not only storage but compute. We’re happy to say that through partnerships with Packet & ServerCentral, today we’re announcing that compute is now available for B2 customers.

Cloud Compute and Storage

Backblaze has directly connected B2 with the compute servers of Packet and ServerCentral, thereby allowing near-instant (< 10 ms) data transfers between services. Also, transferring data between B2 and both our compute partners is free.

  • Storing data in B2 and want to run an AI analysis on it? — There are no fees to move the data to our compute partners.
  • Generating data in an application? — Run the application with one of our partners and store it in B2.
  • Transfers are free and you’ll save more than 50% off of the equivalent set of services from AWS.

These partnerships enable B2 customers to use compute, give our compute partners’ customers access to cloud storage, and introduce new customers to industry-leading storage and compute — all with high-performance, low-latency, and low-cost.

Is This a Big Deal? We Think So

Compute is one of the most requested services from our customers. Why? Because it unlocks a number of use cases for them. Let’s look at three popular examples:

Transcoding Media Files

B2 has earned wide adoption in the Media & Entertainment (“M&E”) industry. Our affordable storage and download pricing make B2 great for a wide variety of M&E use cases. But many M&E workflows require compute. Content syndicators, like American Public Television, need the ability to transcode files to meet localization and distribution management requirements.

There are a multitude of reasons that transcoding is needed — thumbnail and proxy generation enable M&E professionals to work efficiently. Without compute, the act of transcoding files remains cumbersome: either the files need to be brought down from the cloud, transcoded, and then pushed back up, or they must be kept locally until the project is complete. Both scenarios are inefficient.

Starting today, any content producer can spin up compute with one of our partners, pay by the hour for their transcode processing, and return the new media files to B2 for storage and distribution. The company saves money, moves faster, and ensures their files are safe and secure.

Disaster Recovery

Backblaze’s heritage is based on providing outstanding backup services. When you have incredibly affordable cloud storage, it ends up being a great destination for your backup data.

Most enterprises have virtual machines (“VMs”) running in their infrastructure and those VMs need to be backed up. In a disaster scenario, a business wants to know they can get back up and running quickly.

With all data stored in B2, a business can get up and running quickly. Simply restore your backed up VM to one of our compute providers, and your business will be able to get back online.

Since B2 does not place restrictions, delays, or penalties on getting data out, customers can get back up and running quickly and affordably.

Saving $74 Million (aka “The Dropbox Effect”)

Ten years ago, Backblaze decided that S3 was too costly a platform to build its cloud storage business. Instead, we created the Backblaze Storage Pod and our own cloud storage infrastructure. That decision enabled us to offer our customers storage at a previously unavailable price point and maintain those prices for over a decade. It also laid the foundation for Netflix Open Connect and Facebook Open Compute.

Dropbox recently migrated the majority of their cloud services off of AWS and onto Dropbox’s own infrastructure. By leaving AWS, Dropbox was able to build out their own data centers and still save over $74 Million. They achieved those savings by avoiding the fees AWS charges for storing and downloading data, which, incidentally, are five times higher than Backblaze B2.

For Dropbox, being able to realize savings was possible because they have access to enough capital and expertise that they can build out their own infrastructure. For companies that have such resources and scale, that’s a great answer.

“Before this offering, the economics of the cloud would have made our business simply unviable.” — Gabriel Menegatti, SlicingDice

The question Backblaze and our compute partners pondered was: “How can we democratize the Dropbox effect for our storage and compute customers? How can we help customers do more and pay less?” The answer we came up with was to connect Backblaze’s B2 storage with strategic compute partners and remove any transfer fees between them. You may not save $74 million as Dropbox did, but you can choose the optimal providers for your use case and realize significant savings in the process.

This Sounds Good — Tell Me More About Your Partners

We’re very fortunate to be launching our compute program with two fantastic partners in Packet and ServerCentral. These partners allow us to offer a range of computing services.

Packet

We recommend Packet for customers that need on-demand, high performance, bare metal servers available by the hour. They also have robust offerings for private / customized deployments. Their offerings end up costing 50-75% of the equivalent offerings from EC2.

To get started with Packet and B2, visit our partner page on Packet.net.

ServerCentral

ServerCentral is the right partner for customers that have business and IT challenges that require more than “just” hardware. They specialize in fully managed, custom cloud solutions that solve complex business and IT challenges. ServerCentral also has expertise in managed network solutions to address global connectivity and content delivery.

To get started with ServerCentral and B2, visit our partner page on ServerCentral.com.

What’s Next?

We’re excited to find out. The combination of B2 and compute unlocks use cases that were previously impossible or at least unaffordable.

“The combination of performance and price offered by this partnership enables me to create an entirely new business line. Before this offering, the economics of the cloud would have made our business simply unviable,” noted Gabriel Menegatti, co-founder at SlicingDice, a serverless data warehousing service. “Knowing that transfers between compute and B2 are free means I don’t have to worry about my business being successful. And, with download pricing from B2 at just $0.01 GB, I know I’m avoiding a 400% tax from AWS on data I retrieve.”

What can you do with B2 & compute? Please share your ideas with us in the comments. And, for those attending NAB 2018 in Las Vegas next week, please come by and say hello!

The post Backblaze Announces B2 Compute Partnerships appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

MPAA Aims to Prevent Piracy Leaks With New Security Program

Post Syndicated from Andy original https://torrentfreak.com/mpaa-aims-to-prevent-piracy-leaks-with-new-security-program-180403/

When movies and TV shows leak onto the Internet in advance of their intended release dates, it’s generally a time of celebration for pirates.

Grabbing a workprint or DVD screener of an Oscar nominee, or a TV show yet to be aired, makes the Internet bubble with excitement. But for the studios and companies behind the products, it presents their worst nightmare.

Despite all the takedown efforts known to man, once content appears, there’s no putting the genie back into the bottle.

With this in mind, the solution doesn’t lie with reactionary efforts such as Internet disconnections, site-blocking, and similar measures, but with better hygiene while content is still in production or being prepared for distribution. It’s something the MPAA hopes to address with a brand new program designed to bring the security of third-party vendors up to scratch.

The Trusted Partner Network (TPN) is the brainchild of the MPAA and the Content Delivery & Security Association (CDSA), a worldwide forum advocating the innovative and responsible delivery and storage of entertainment content.

TPN is being touted as a global industry-wide film and television content protection initiative which will help companies prevent leaks, breaches, and hacks of their customers’ movies and television shows prior to their intended release.

“Content is now created by a growing ecosystem of third-party vendors, who collaborate with varying degrees of security,” TPN explains.

“This has escalated the security threat to the entertainment industry’s most prized asset, its content. The TPN program seeks to raise security awareness, preparedness, and capabilities within our industry.”

The TPN will establish a “single benchmark of minimum security preparedness” for vendors, whose details will be available via a centralized and global “trusted partner” database. The TPN will replace the security assessment programs already in place at the MPAA and CDSA.

While content owners and vendors are still able to conduct their own security assessments on an “as-needed” basis, the aim is for the TPN to reduce the number of assessments carried out while assisting in identifying vulnerabilities. The pool of “trusted partners” is designed to help all involved understand and meet the challenges of leaks, whether that’s movie, TV show, or associated content.

While joining the TPN program is voluntary, there’s a strong suggestion that becoming involved in the program is in vendors’ best interests. Being able to carry the TPN logo will be an asset to doing business with others involved in the scheme, it’s suggested.

Once in, vendors will need to hire a TPN-approved assessor to carry out an initial audit of their supply chain and best practices, which in turn will need to be guided by the MPAA’s existing content security guidelines.

“Vendors will hire a Qualified Assessor from the TPN database and will schedule their assessment and manage the process via the secure online platform,” TPN says, noting that vendors will cover their own costs unless an assessment is carried out at the request of a content owner.

The TPN explains that members of the scheme aren’t passed or failed in respect of their security preparedness. However, there is an expectation that they will come up to scratch and prove it with a subsequent positive report from a TPN-approved assessor. Assessors themselves will also be assessed via the TPN Qualified Assessor Program.

By imposing MPAA best practices upon partner companies, it’s hoped that some if not all of the major leaks that have plagued the industry over the past several years will be prevented in future. Whether that’s the usual DVD screener leaks, workprints, scripts or other content, it’s believed the TPN should be able to help in some way, although the former might be a more difficult nut to crack.

There’s no doubting that the problem TPN aims to address is serious. In 2017 alone, hackers and other individuals obtained and then leaked episodes of Orange is the New Black, unreleased ABC content, an episode of Game of Thrones sourced from India and scripts from the same show. Even blundering efforts managed to make their mark.

“Creating the films and television shows enjoyed by audiences around the world increasingly requires a network of specialized vendors and technicians,” says MPAA chairman and CEO Charles Rivkin.

“That’s why maintaining high security standards for all third-party operations — from script to screen — is such an important part of preventing the theft of creative works and ultimately protects jobs and the health of our vibrant creative economy.”

According to TPN, the first class of TPN Assessors was recruited and tested last month while beta-testing of key vendors will begin in April. The full program will roll out in June 2018.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Innovation Flywheels and the AWS Serverless Application Repository

Post Syndicated from Tim Wagner original https://aws.amazon.com/blogs/compute/innovation-flywheels-and-the-aws-serverless-application-repository/

At AWS, our customers have always been the motivation for our innovation. In turn, we’re committed to helping them accelerate the pace of their own innovation. It was in the spirit of helping our customers achieve their objectives faster that we launched AWS Lambda in 2014, eliminating the burden of server management and enabling AWS developers to focus on business logic instead of the challenges of provisioning and managing infrastructure.

In the years since, our customers have built amazing things using Lambda and other serverless offerings, such as Amazon API Gateway, Amazon Cognito, and Amazon DynamoDB. Together, these services make it easy to build entire applications without the need to provision, manage, monitor, or patch servers. By removing much of the operational drudgery of infrastructure management, we’ve helped our customers become more agile and achieve faster time-to-market for their applications and services. By eliminating cold servers and cold containers with request-based pricing, we’ve also eliminated the high cost of idle capacity and helped our customers achieve dramatically higher utilization and better economics.

After we launched Lambda, though, we quickly learned an important lesson: A single Lambda function rarely exists in isolation. Rather, many functions are part of serverless applications that collectively deliver customer value. Whether it was a combination of event sources and event handlers, a serverless web app that pairs APIs and functions for dynamic content with a static content repository, or a collection of functions that together provide a microservice architecture, our customers were building and delivering serverless architectures for every conceivable problem. Despite the economic and agility benefits that hundreds of thousands of AWS customers were enjoying with Lambda, we realized there was still more we could do.

How Customer Feedback Inspired Us to Innovate

We heard from our customers that getting started—either from scratch or when augmenting their implementation with new techniques or technologies—remained a challenge. When we looked for serverless assets to share, we found stellar examples built by serverless pioneers that represented a multitude of solutions across industries.

There were apps to facilitate monitoring and logging, to process image and audio files, to create Alexa skills, and to integrate with notification and location services. These apps ranged from “getting started” examples to complete, ready-to-run assets. What was missing, however, was a unified place for customers to discover this diversity of serverless applications and a step-by-step interface to help them configure and deploy them.

We also heard from customers and partners that building their own ecosystems—ecosystems increasingly composed of functions, APIs, and serverless applications—remained a challenge. They wanted a simple way to share samples, create extensibility, and grow consumer relationships on top of serverless approaches.

We built the AWS Serverless Application Repository to help solve both of these challenges by offering publishers and consumers of serverless apps a simple, fast, and effective way to share applications and grow user communities around them. Now, developers can easily learn how to apply serverless approaches to their implementation and business challenges by discovering, customizing, and deploying serverless applications directly from the Serverless Application Repository. They can also find libraries, components, patterns, and best practices that augment their existing knowledge, helping them bring services and applications to market faster than ever before.

How the AWS Serverless Application Repository Inspires Innovation for All Customers

Companies that want to create ecosystems, share samples, deliver extensibility and customization options, and complement their existing SaaS services use the Serverless Application Repository as a distribution channel, producing apps that can be easily discovered and consumed by their customers. AWS partners like HERE have introduced their location and transit services to thousands of companies and developers. Partners like Datadog, Splunk, and TensorIoT have showcased monitoring, logging, and IoT applications to the serverless community.

Individual developers are also publishing serverless applications that push the boundaries of innovation—some have published applications that leverage machine learning to predict the quality of wine, while others have published applications that monitor crypto-currencies, instantly build beautiful image galleries, or create fast and simple surveys. All of these publishers are using serverless apps, and the Serverless Application Repository, as the easiest way to share what they’ve built. Best of all, their customers and fellow community members can find and deploy these applications with just a few clicks in the Lambda console. Apps in the Serverless Application Repository are free of charge, making it easy to explore new solutions or learn new technologies.
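
For readers who prefer scripting to console clicks, the repository can also be browsed programmatically. Here is a minimal sketch using the boto3 serverlessrepo client; the region and result limit are arbitrary choices:

import boto3

# List a page of applications published to the AWS Serverless Application Repository
sar = boto3.client("serverlessrepo", region_name="us-east-1")

response = sar.list_applications(MaxItems=25)
for app in response["Applications"]:
    print(f"{app['Name']} by {app['Author']}: {app.get('Description', '')[:80]}")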

Finally, we at AWS continue to publish apps for the community to use. From apps that leverage Amazon Cognito to sync user data across applications to our latest collection of serverless apps that enable users to quickly execute common financial calculations, we’re constantly looking for opportunities to contribute to community growth and innovation.

At AWS, we’re more excited than ever by the growing adoption of serverless architectures and the innovation that services like AWS Lambda make possible. Helping our customers create and deliver new ideas drives us to keep inventing ways to make building and sharing serverless apps even easier. As the number of applications in the Serverless Application Repository grows, so too will the innovation that it fuels for both the owners and the consumers of those apps. With the general availability of the Serverless Application Repository, our customers become more than the engine of our innovation—they become the engine of innovation for one another.

To browse, discover, deploy, and publish serverless apps in minutes, visit the Serverless Application Repository. Go serverless—and go innovate!

Dr. Tim Wagner is the General Manager of AWS Lambda and Amazon API Gateway.

Alex’s quick and easy digital making Easter egg hunt

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/alexs-easter-egg-hunt/

Looking to incorporate some digital making into your Easter weekend? You’ve come to the right place! With a Raspberry Pi, a few wires, and some simple code, you can take your festivities to the next level — here’s how!


If you logged in to watch our Instagram live-stream yesterday, you’ll have seen me put together a simple egg carton and some wires to create circuits. These circuits, when closed by way of a foil-wrapped chocolate egg, instruct a Raspberry Pi to reveal the whereabouts of a larger chocolate egg!

Make it

You’ll need an egg carton, two male-to-female jumper wires, and two crocodile leads for each egg you use.


Connect your leads together in pairs: one end of a crocodile lead to the male end of one jumper wire. Attach the free crocodile clips of two leads to each corner of the egg carton (as shown up top). Then hook up the female ends to GPIO pins: one numbered pin and one ground pin per egg. I recommend pins 3, 4, 18 and 24, as they all have adjacent GND pins.


Your foil-wrapped Easter egg will complete the circuit — make sure it’s touching both the GPIO- and GND-connected clips when resting in the carton.


Wrap it

For your convenience (and our sweet tooth), we tested several foil-wrapped eggs (Easter and otherwise) to see which are conductive.

Raspberry Pi on Twitter

We’re egg-sperimenting with Easter deliciousness to find which treat is the most conductive. Why? All will be revealed in our Instagram Easter live-stream tomorrow.

The result? None of them are! But if you unwrap an egg and rewrap it with the non-decorative foil side outward, this tends to work. You could also use aluminium foil or copper tape to create a conductive layer.

Code it

Next, you’ll need to create the code for your hunt. The script below contains the bare bones needed to make the project work — you can embellish it however you wish using GUIs, flashing LEDs, music, etc.

Open Thonny or IDLE on Raspbian and create a new file called egghunt.py. Then enter the following code:

We’re using ButtonBoard from the gpiozero library. This allows us to link several buttons together as an object and set an action for when any number of the buttons are pressed. Here, the script waits for all four circuits to be completed before printing the location of the prize in the Python shell.

Your turn

And that’s it! Now you just need to hide your small foil eggs around the house and challenge your kids/friends/neighbours to find them. Then, once every circuit is completed with an egg, the great prize will be revealed.

Give it a go this weekend! And if you do, be sure to let us know on social media.

(Thank you to Lauren Hyams for suggesting we “do something for Easter” and Ben ‘gpiozero’ Nuttall for introducing me to ButtonBoard.)

The post Alex’s quick and easy digital making Easter egg hunt appeared first on Raspberry Pi.

Our 2017 Annual Review

Post Syndicated from Oliver Quinlan original https://www.raspberrypi.org/blog/annual-review-2017/

Each year we take stock at the Raspberry Pi Foundation, looking back at what we’ve achieved over the previous twelve months. We’ve just published our Annual Review for 2017, reflecting on the progress we’ve made as a foundation and a community towards putting the power of digital making in the hands of people all over the world.

In the review, you can find out about all the different education programmes we run. Moreover, you can hear from people who have taken part, learned through making, and discovered they can do things with technology that they never thought they could.

Growing our reach

Our reach grew hugely in 2017, and the numbers tell this story.

By the end of 2017, we’d sold over 17 million Raspberry Pi computers, bringing tools for learning programming and physical computing to people all over the world.

Vibrant learning and making communities

Code Club grew by 2,964 clubs in 2017, to over 10,000 clubs across the world, reaching over 150,000 9- to 13-year-olds.

“The best moment is seeing a child discover something for the first time. It is amazing.”
– Code Club volunteer

In 2017, CoderDojo became part of the Raspberry Pi family. Over the year, it grew by 41% to 1,556 active Dojos, involving nearly 40,000 7- to 17-year-olds in creating with code and collaborating to learn about technology.

Raspberry Jams continued to grow, with 18,700 people attending events organised by our amazing community members.



Supporting teaching and learning

Our online resources grew to 208 projects in 2017, and 8.5 million people visited them to get making.

“I like coding because it’s like a whole other language that you have to learn, and it creates something very interesting in the end.”
– Betty, Year 10 student

2017 was also the year we began offering online training courses. 19,000 people joined us to learn about programming, physical computing, and running a Code Club.



Over 6,800 young people entered Mission Zero and Mission Space Lab, 2017’s two Astro Pi challenges, writing code that has run, or soon will run, on board the International Space Station.

More than 600 educators joined our face-to-face Picademy training last year. Our community of Raspberry Pi Certified Educators grew to 1,500, all leading digital making across schools, libraries, and other settings where young people learn.

Being social

Well over a million people follow us on social media, and in 2017 we saw big increases in our YouTube and Instagram followings. We have been creating much more video content to share what we do with audiences on these and other social networks.

The future

It’s been a big year, as we continue to reach even more people. This wouldn’t be possible without the amazing work of volunteers and community members who do so much to create opportunities for others to get involved. Behind each of these numbers is a person discovering digital making for the first time, learning new skills, or succeeding with a project that makes a difference to something they care about.

You can read our 2017 Annual Review in full over on our About Us page.

The post Our 2017 Annual Review appeared first on Raspberry Pi.

How The Pirate Bay Helped Spotify Become a Success

Post Syndicated from Ernesto original https://torrentfreak.com/how-the-pirate-bay-helped-spotify-become-a-success-180319/

When Spotify launched its first beta in the fall of 2008, many people were blown away by its ease of use.

With the option to stream millions of tracks supported by an occasional ad, or free of ads for a small subscription fee, Spotify offered something that was more convenient than piracy.

In the years that followed, Spotify rolled out its music service in more than 60 countries, amassing over 160 million users. While the service is often billed as a piracy killer, ironically, it also owes its success to piracy.

As a teenager, Spotify founder and CEO Daniel Ek was fascinated by Napster, which triggered a piracy revolution in the late nineties. Napster made all the music in the world accessible in a few clicks, something Spotify also set out to do a few years later, legally.

“I want to replicate my first experience with piracy,” Ek told Businessweek years ago. “What eventually killed it was that it didn’t work for the people participating with the content. The challenge here is about solving both of those things.”

While the technical capabilities were certainly there, the main stumbling block was getting the required licenses. The music industry hadn’t had many good experiences with the Internet a decade ago, so there was plenty of hesitation.

The same was true in Sweden, where The Pirate Bay had just gained a lot of traction. A pro-sharing culture was being cultivated there by Piratbyrån (Swedish for “the Piracy Bureau”), the driving force behind the torrent site in its early days.

After the first Pirate Bay raid in 2006, thousands of people gathered in the streets of Stockholm to declare their support for the site and their right to share.

Pro-piracy protest in Stockholm (Jon Åslund, CC BY 2.5)

Interestingly, however, this pro-piracy climate turned out to be in Spotify’s favor. In a detailed feature in the Swedish newspaper Breakit, Per Sundin, CEO of Sony BMG at the time, suggests that The Pirate Bay helped Spotify.

“If Pirate Bay had not existed or made such a mess in the market, I don’t think Spotify would have seen the light of the day. You wouldn’t get the licenses you wanted,” Sundin said.

With music industry revenues dropping, record labels had to fire hundreds of people. They were becoming desperate and were looking for change, something Spotify was promising.

At the time, the idea of having millions of songs readily and legally available was totally new. Many immediately saw it as an “alternative to music piracy” and even Pirate Bay founder Peter Sunde was impressed.

“It was great. It was always what was missing in the pirate services, that intuitive interface,” Sunde told Breakit.

Sunde also believed that The Pirate Bay and all the buzz around piracy in Sweden was a great boon to Spotify. But while the latter turned into a billion-dollar business that’s about to go public, Sunde and the other TPB founders still owe the labels millions in damages.

“Without file-sharing, The Pirate Bay and the political work done by Piratbyrån, it was not possible to get the licensing agreements Spotify received,” Sunde said. “Sometimes I think I should have received 10, 20 or 30 percent of Spotify, as a thank you for the help.”

In addition to creating the right climate for the major record labels to get on board, The Pirate Bay also appears to have been of more practical assistance.

When Spotify first launched, several people noticed that some tracks still had tags from pirate groups such as FairLight in their titles. Those are not the files you’d expect the labels to offer, but files that were on The Pirate Bay.

Also, Spotify mysteriously offered music from a band that decided to share their music on The Pirate Bay, instead of the usual outlets. There’s only one place that could have originated from.

The Pirate Bay.
