Tag Archives: accessibility

Bringing Datacenter-Scale Hardware-Software Co-design to the Cloud with FireSim and Amazon EC2 F1 Instances

Post Syndicated from Mia Champion original https://aws.amazon.com/blogs/compute/bringing-datacenter-scale-hardware-software-co-design-to-the-cloud-with-firesim-and-amazon-ec2-f1-instances/

The recent addition of Xilinx FPGAs to AWS Cloud compute offerings is one way that AWS is enabling global growth in the areas of advanced analytics, deep learning, and AI. The customized F1 servers use pooled accelerators, enabling interconnectivity of up to 8 FPGAs, each with 64 GiB of DDR4 ECC-protected memory and a dedicated PCIe x16 connection. That makes F1 a powerful engine, capable of processing advanced analytical applications at scale and at a significantly faster rate. For example, AWS commercial partner Edico Genome is able to achieve an approximately 30X speedup in analyzing whole genome sequencing datasets using their DRAGEN platform powered with F1 instances.

While the availability of FPGA F1 compute on-demand provides clear accessibility and cost advantages, many mainstream users still find the “threshold to entry” for developing or running FPGA-accelerated simulations too high. Researchers at the UC Berkeley RISE Lab have developed “FireSim”, an open-source resource powered by Amazon EC2 F1 instances. FireSim lowers that entry bar and makes it easier for everyone to leverage the power of an FPGA-accelerated compute environment. Whether you are part of a small start-up development team or working at a large datacenter scale, hardware-software co-design enables faster time-to-deployment, lower costs, and more predictable performance. We are excited to feature FireSim in this post from Sagar Karandikar and his colleagues at UC Berkeley.

―Mia Champion, Sr. Data Scientist, AWS

Figure 1. Mapping an 8-node FireSim cluster simulation to Amazon EC2 F1

As traditional hardware scaling nears its end, the data centers of tomorrow are trending towards heterogeneity, employing custom hardware accelerators and increasingly high-performance interconnects. Prototyping new hardware at scale has traditionally been either extremely expensive, or very slow. In this post, I introduce FireSim, a new hardware simulation platform under development in the computer architecture research group at UC Berkeley that enables fast, scalable hardware simulation using Amazon EC2 F1 instances.

FireSim benefits both hardware and software developers working on new rack-scale systems: software developers can use the simulated nodes with new hardware features as they would use a real machine, while hardware developers have full control over the hardware being simulated and can run real software stacks while hardware is still under development. In conjunction with this post, we’re releasing the first public demo of FireSim, which lets you deploy your own 8-node simulated cluster on an F1 Instance and run benchmarks against it. This demo simulates a pre-built “vanilla” cluster, but demonstrates FireSim’s high performance and usability.

Why FireSim + F1?

FPGA-accelerated hardware simulation is by no means a new concept. However, previous attempts to use FPGAs for simulation have been fraught with usability, scalability, and cost issues. FireSim takes advantage of EC2 F1 and open-source hardware to address the traditional problems with FPGA-accelerated simulation:
Problem #1: FPGA-based simulations have traditionally been expensive, difficult to deploy, and difficult to reproduce.
FireSim uses public-cloud infrastructure like F1, which means no upfront cost to purchase and deploy FPGAs. Developers and researchers can distribute pre-built AMIs and AFIs, as in this public demo (more details later in this post), to make experiments easy to reproduce. FireSim also automates most of the work involved in deploying an FPGA simulation, essentially enabling one-click conversion from new RTL to deploying on an FPGA cluster.

Problem #2: FPGA-based simulations have traditionally been difficult (and expensive) to scale.
Because FireSim uses F1, users can scale out experiments by spinning up additional EC2 instances, rather than spending hundreds of thousands of dollars on large FPGA clusters.

Problem #3: Finding open hardware to simulate has traditionally been difficult. Finding open hardware that can run real software stacks is even harder.
FireSim simulates RocketChip, an open, silicon-proven, RISC-V-based processor platform, and adds peripherals like a NIC and disk device to build up a realistic system. Processors that implement RISC-V automatically support real operating systems (such as Linux) and even support applications like Apache and Memcached. We provide a custom Buildroot-based FireSim Linux distribution that runs on our simulated nodes and includes many popular developer tools.

Problem #4: Writing hardware in traditional HDLs is time-consuming.
Both FireSim and RocketChip use the Chisel HDL, which brings modern programming paradigms to hardware description languages. Chisel greatly simplifies the process of building large, highly parameterized hardware components.

How to use FireSim for hardware/software co-design

FireSim drastically improves the process of co-designing hardware and software by acting as a push-button interface for collaboration between hardware developers and systems software developers. The following diagram describes the workflows that hardware and software developers use when working with FireSim.

Figure 2. The FireSim custom hardware development workflow.

The hardware developer’s view:

  1. Write custom RTL for your accelerator, peripheral, or processor modification in a productive language like Chisel.
  2. Run a software simulation of your hardware design in standard gate-level simulation tools for early-stage debugging.
  3. Run FireSim build scripts, which automatically build your simulation, run it through the Vivado toolchain/AWS shell scripts, and publish an AFI.
  4. Deploy your simulation on EC2 F1 using the generated simulation driver and AFI.
  5. Run real software builds released by software developers to benchmark your hardware.

The software developer’s view:

  1. Deploy the AMI/AFI generated by the hardware developer on an F1 instance to simulate a cluster of nodes (or scale out to many F1 nodes for larger simulated core-counts).
  2. Connect to the simulated nodes in the cluster using SSH and boot the Linux distribution included with FireSim. This distribution is easy to customize, and already supports many standard software packages.
  3. Directly prototype your software using the same exact interfaces that the software will see when deployed on the real future system you’re prototyping, with the same performance characteristics as observed from software, even at scale.

FireSim demo v1.0

Figure 3. Cluster topology simulated by FireSim demo v1.0.

This first public demo of FireSim focuses on the aforementioned “software-developer’s view” of the custom hardware development cycle. The demo simulates a cluster of 1 to 8 RocketChip-based nodes, interconnected by a functional network simulation. The simulated nodes work just like “real” machines:  they boot Linux, you can connect to them using SSH, and you can run real applications on top. The nodes can see each other (and the EC2 F1 instance on which they’re deployed) on the network and communicate with one another. While the demo currently simulates a pre-built “vanilla” cluster, the entire hardware configuration of these simulated nodes can be modified after FireSim is open-sourced.

In this post, I walk through bringing up a single-node FireSim simulation for experienced EC2 F1 users. For more detailed instructions for new users and instructions for running a larger 8-node simulation, see FireSim Demo v1.0 on Amazon EC2 F1. Both demos walk you through setting up an instance from a demo AMI/AFI and booting Linux on the simulated nodes. The full demo instructions also walk you through an example workload, running Memcached on the simulated nodes, with YCSB as a load generator to demonstrate network functionality.

Deploying the demo on F1

In this release, we provide pre-built binaries for driving simulation from the host and a pre-built AFI that contains the FPGA infrastructure necessary to simulate a RocketChip-based node.

Starting your F1 instances

First, launch an f1.2xlarge instance using the free FireSim Demo v1.0 product available on the AWS Marketplace. After your instance has booted, log in using the user name centos. On the first login, you should see the message “FireSim network config completed.” This sets up the necessary tap interfaces and bridge on the EC2 instance to enable communication with the simulated nodes.
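If you prefer to script the launch, a minimal sketch using Python and boto3 is shown below. The AMI ID, key pair, and security group are placeholder values (assumptions, not identifiers published for the demo); you would substitute the FireSim Demo v1.0 Marketplace AMI and your own resources.

# Minimal sketch: launch an f1.2xlarge from the FireSim demo AMI with boto3.
# ImageId, KeyName, and SecurityGroupIds are placeholders, not real demo values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # FireSim Demo v1.0 AMI from the AWS Marketplace
    InstanceType="f1.2xlarge",
    KeyName="my-keypair",              # your own EC2 key pair
    SecurityGroupIds=["sg-xxxxxxxx"],  # a security group that allows SSH from your IP
    MinCount=1,
    MaxCount=1,
)

print("Launched", response["Instances"][0]["InstanceId"])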

AMI contents

The AMI contains a variety of tools to help you run simulations and build software for RISC-V systems, including the riscv64 toolchain, a Buildroot-based Linux distribution that runs on the simulated nodes, and the simulation driver program. For more details, see the AMI Contents section on the FireSim website.

Single-node demo

First, you need to flash the FPGA with the FireSim AFI. To do so, run:

[centos@YOUR_INSTANCE_ADDR ~]$ sudo fpga-load-local-image -S 0 -I agfi-00a74c2d615134b21

To start a simulation, run the following at the command line:

[centos@YOUR_INSTANCE_ADDR ~]$ boot-firesim-singlenode

This automatically calls the simulation driver, telling it to load the Linux kernel image and root filesystem for the Linux distro. This produces output similar to the following:

Simulations Started. You can use the UART console of each simulated node by attaching to the following screens:

There is a screen on:

2492.fsim0      (Detached)

1 Socket in /var/run/screen/S-centos.

You could connect to the simulated UART console by attaching to this screen, but for this walkthrough, use SSH to access the node instead.

First, ping the node to make sure it has come online. This is currently required because nodes may get stuck at Linux boot if the NIC does not receive any network traffic. For more information, see Troubleshooting/Errata. The node is always assigned the IP address 192.168.1.10:

[centos@YOUR_INSTANCE_ADDR ~]$ ping 192.168.1.10

This should eventually produce the following output:

PING 192.168.1.10 (192.168.1.10) 56(84) bytes of data.

From 192.168.1.1 icmp_seq=1 Destination Host Unreachable

64 bytes from 192.168.1.10: icmp_seq=1 ttl=64 time=2017 ms

64 bytes from 192.168.1.10: icmp_seq=2 ttl=64 time=1018 ms

64 bytes from 192.168.1.10: icmp_seq=3 ttl=64 time=19.0 ms

At this point, you know that the simulated node is online. You can connect to it using SSH with the user name root and password firesim. It is also convenient to make sure that your TERM variable is set correctly. In this case, the simulation expects TERM=linux, so provide that:

[centos@YOUR_INSTANCE_ADDR ~]$ TERM=linux ssh root@192.168.1.10

The authenticity of host ‘192.168.1.10 (192.168.1.10)’ can’t be established.

ECDSA key fingerprint is 63:e9:66:d0:5c:06:2c:1d:5c:95:33:c8:36:92:30:49.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘192.168.1.10’ (ECDSA) to the list of known hosts.

root@192.168.1.10’s password:

#

At this point, you’re connected to the simulated node. Run uname -a as an example. You should see the following output, indicating that you’re connected to a RISC-V system:

# uname -a

Linux buildroot 4.12.0-rc2 #1 Fri Aug 4 03:44:55 UTC 2017 riscv64 GNU/Linux

Now you can run programs on the simulated node, as you would with a real machine. For an example workload (running YCSB against Memcached on the simulated node) or to run a larger 8-node simulation, see the full FireSim Demo v1.0 on Amazon EC2 F1 demo instructions.
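If you want to script interactions with the simulated node, the wait-for-ping and SSH steps above are easy to automate. Here is a rough sketch in Python using the third-party paramiko library, with the IP address and root/firesim credentials from this demo; treat it as an illustration rather than part of the FireSim tooling.

# Rough sketch: wait for the simulated node to answer ping, then run a command
# over SSH. Uses the demo's fixed IP (192.168.1.10) and root/firesim credentials.
import subprocess
import time
import paramiko

NODE_IP = "192.168.1.10"

# Ping until the node responds (it may wait at boot until the NIC sees traffic).
while subprocess.call(["ping", "-c", "1", "-W", "1", NODE_IP],
                      stdout=subprocess.DEVNULL) != 0:
    time.sleep(1)

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(NODE_IP, username="root", password="firesim")

stdin, stdout, stderr = ssh.exec_command("uname -a")
print(stdout.read().decode())
ssh.close()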

Finally, when you are finished, you can shut down the simulated node by running the following command from within the simulated node:

# poweroff

You can confirm that the simulation has ended by running screen -ls, which should now report that there are no detached screens.

Future plans

At Berkeley, we’re planning to keep improving the FireSim platform to enable our own research in future data center architectures, like FireBox. The FireSim platform will eventually support more sophisticated processors, custom accelerators (such as Hwacha), network models, and peripherals, in addition to scaling to larger numbers of FPGAs. In the future, we’ll open source the entire platform, including Midas, the tool used to transform RTL into FPGA simulators, allowing users to modify any part of the hardware/software stack. Follow @firesimproject on Twitter to stay tuned to future FireSim updates.

Acknowledgements

FireSim is the joint work of many students and faculty at Berkeley: Sagar Karandikar, Donggyu Kim, Howard Mao, David Biancolin, Jack Koenig, Jonathan Bachrach, and Krste Asanović. This work is partially funded by AWS through the RISE Lab, by the Intel Science and Technology Center for Agile HW Design, and by ASPIRE Lab sponsors and affiliates Intel, Google, HPE, Huawei, NVIDIA, and SK hynix.

Now Available – Amazon Aurora with PostgreSQL Compatibility

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amazon-aurora-with-postgresql-compatibility/

Late last year I told you about our plans to add PostgreSQL compatibility to Amazon Aurora. We launched the private beta shortly after that announcement, and followed it up earlier this year with an open preview. We’ve received lots of great feedback during the beta and the preview and have done our best to make sure that the product meets your needs and exceeds your expectations!

Now Generally Available
I am happy to report that Amazon Aurora with PostgreSQL Compatibility is now generally available and that you can use it today in four AWS Regions, with more to follow. It is compatible with PostgreSQL 9.6.3 and scales automatically to support up to 64 TB of storage, with 6-way replication behind the scenes to improve performance and availability.

Just like Amazon Aurora with MySQL compatibility, this edition is fully managed and is very easy to set up and to use. On the performance side, you can expect up to 3x the throughput that you’d get if you ran PostgreSQL on your own (you can read Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases to learn more about how we did this).

You can launch a PostgreSQL-compatible Amazon Aurora instance from the RDS Console by selecting Amazon Aurora as the engine and PostgreSQL-compatible as the edition, and clicking on Next:

Then choose your instance class and single or Multi-AZ deployment (good for dev/test and production, respectively), set the instance name and the administrator credentials, and click on Next:

You can choose from six instance classes (2 to 64 vCPUs and 15.25 to 488 GiB of memory):

The db.r4 instance class is a new addition to Aurora and to RDS, and gives you an additional size at the top end. The db.r4.16xlarge will give you additional write performance, and may allow you to use a single Aurora database instead of two or more sharded databases.

You can also set many advanced options on the next page, starting with network options such as the VPC and public accessibility:

You can set the cluster name and other database options. Encryption is easy to use and enabled by default; you can use the built-in default master key or choose one of your own:

You can also set failover behavior, the retention period for snapshot backups, and choose to enable collection of detailed (OS-level) metrics via Enhanced Monitoring:

After you have set it up to your liking, click on Launch DB Instance to proceed!
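The same launch can also be scripted with the AWS SDK. The sketch below uses Python and boto3; the cluster identifier, credentials, and instance class are placeholder values chosen for illustration, not recommendations.

# Rough sketch: create an Aurora PostgreSQL cluster and its primary instance.
# All identifiers and credentials are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-pg",
    Engine="aurora-postgresql",
    MasterUsername="postgres",
    MasterUserPassword="ChangeMe12345",
)

rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-pg-primary",
    DBInstanceClass="db.r4.large",          # any of the six supported classes
    Engine="aurora-postgresql",
    DBClusterIdentifier="my-aurora-pg",
)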

The new instances (primary and secondary since I specified Multi-AZ) are up and running within minutes:

Each PostgreSQL-compatible instance publishes 44 metrics to CloudWatch automatically:

With enhanced monitoring enabled, each instance collects additional per-instance and per-process metrics. It can be enabled when the instance is launched, or afterward, via Modify Instance. Here are some of the metrics collected when enhanced monitoring is enabled:

Clicking on Manage Graphs lets you choose which metrics are shown:

Per-process metrics are also available:

You can scale your read capacity by creating up to 15 Aurora replicas:

The cluster provides a single reader endpoint that you can access in order to load-balance requests across the replicas:
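Application code simply points read-only connections at that reader endpoint. A minimal sketch in Python using psycopg2 is below; the endpoint, database name, and credentials are placeholders.

# Minimal sketch: run a read-only query against the cluster's reader endpoint
# so it is load-balanced across the Aurora replicas. Values are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-aurora-pg.cluster-ro-abc123example.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="mydb",
    user="postgres",
    password="ChangeMe12345",
)

with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM orders;")
    print(cur.fetchone()[0])

conn.close()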

Performance Insights
As I noted earlier, Performance Insights is turned on automatically. This Amazon Aurora feature is wired directly into the database engine and allows you to look deep inside of each query, seeing the database resources that it uses and how they contribute to the overall response time. Here’s the initial view:

I can slice the view by SQL query in order to see how many concurrent copies of each query are running:

There are more views and options than I can fit in this post; to learn more take a look at Using Performance Insights.

Migrating to Amazon Aurora with PostgreSQL Compatibility
AWS Database Migration Service and the Schema Conversion Tool are ready to help you to move data stored in commercial and open-source databases to Amazon Aurora. The Schema Conversion Tool will perform a quick assessment of your database schemas and your code in order to help you to choose between MySQL and PostgreSQL. Our new, limited-time, Free DMS program allows you to use DMS and SCT to migrate to Aurora at no cost, with access to several types of DMS Instances for up to 6 months.

If you are already using PostgreSQL, you will be happy to hear that we support a long list of extensions including PostGIS and dblink.

Available Now
You can use Amazon Aurora with PostgreSQL Compatibility today in the US East (Northern Virginia), EU (Ireland), US West (Oregon), and US East (Ohio) Regions, with others to follow as soon as possible.

Jeff;

AWS Hot Startups – September 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-september-2017/

As consumers continue to demand faster, simpler, and more on-the-go services, FinTech companies are responding with ever more innovative solutions to fit everyone’s needs and to improve customer experience. This month, we are excited to feature the following startups—all of whom are disrupting traditional financial services in unique ways:

  • Acorns – allowing customers to invest spare change automatically.
  • Bondlinc – improving the bond trading experience for clients, financial institutions, and private banks.
  • Lenda – reimagining homeownership with a secure and streamlined online service.

Acorns (Irvine, CA)

Driven by the belief that anyone can grow wealth, Acorns is relentlessly pursuing ways to help make that happen. Currently the fastest-growing micro-investing app in the U.S., Acorns takes mere minutes to get started and is currently helping over 2.2 million people grow their wealth. And unlike other FinTech apps, Acorns is focused on helping America’s middle class – namely the 182 million citizens who make less than $100,000 per year – and looking after their financial best interests.

Acorns is able to help their customers effortlessly invest their money, little by little, by offering ETF portfolios put together by Dr. Harry Markowitz, a Nobel Laureate in economic sciences. They also offer a range of services, including “Round-Ups,” whereby customers can automatically invest spare change from everyday purchases, and “Recurring Investments,” through which customers can set up automatic transfers of just $5 per week into their portfolio. Additionally, Found Money, Acorns’ earning platform, can help anyone spend smarter as the company connects customers to brands like Lyft, Airbnb, and Skillshare, who then automatically invest in customers’ Acorns accounts.

The Acorns platform runs entirely on AWS, allowing them to deliver a secure and scalable cloud-based experience. By utilizing AWS, Acorns is able to offer an exceptional customer experience and fulfill its core mission. Acorns uses Terraform to manage services such as Amazon EC2 Container Service, Amazon CloudFront, and Amazon S3. They also use Amazon RDS and Amazon Redshift for data storage, and Amazon Glacier to manage document retention.

Acorns is hiring! Be sure to check out their careers page if you are interested.

Bondlinc (Singapore)

Eng Keong, Founder and CEO of Bondlinc, has long wanted to standardize, improve, and automate the traditional workflows that revolve around bond trading. As a former trader at BNP Paribas and Jefferies & Company, E.K. – as Keong is known – had personally seen how manual processes led to information bottlenecks in over-the-counter practices. This drove him, along with future Bondlinc CTO Vincent Caldeira, to start a new service that maximizes efficiency, information distribution, and accessibility for both clients and bankers in the bond market.

Currently, bond trading requires banks to spend a significant amount of resources retrieving data from expensive and restricted institutional sources, performing suitability checks, and attaching required documentation before presenting all relevant information to clients – usually by email. Bankers are often overwhelmed by these time-consuming tasks, which means clients don’t always get proper access to time-sensitive bond information and pricing. Bondlinc bridges this gap between banks and clients by providing a variety of solutions, including easy access to basic bond information and analytics, updates of new issues and relevant news, consolidated management of your portfolio, and a chat function between banker and client. By making the bond market much more accessible to clients, Bondlinc is taking private banking to the next level, while improving efficiency of the banks as well.

As a startup running on AWS since inception, Bondlinc has built and operated its SaaS product by leveraging Amazon EC2, Amazon S3, Elastic Load Balancing, and Amazon RDS across multiple Availability Zones to provide its customers (namely, financial institutions) a highly available and seamlessly scalable product distribution platform. Bondlinc also makes extensive use of Amazon CloudWatch, AWS CloudTrail, and Amazon SNS to meet the stringent operational monitoring, auditing, compliance, and governance requirements of its customers. Bondlinc is currently experimenting with Amazon Lex to build a conversational interface into its mobile application via a chat-bot that provides trading assistance services.

To see how Bondlinc works, request a demo at Bondlinc.com.

Lenda (San Francisco, CA)

Lenda is a digital mortgage company founded by seasoned FinTech entrepreneur Jason van den Brand. Jason wanted to create a smarter, simpler, and more streamlined system for people to either get a mortgage or refinance their homes. With Lenda, customers can find out if they are pre-approved for loans, and receive accurate, real-time mortgage rate quotes from industry-experienced home loan advisors. Lenda’s advisors support customers through the loan process by providing financial advice and guidance for a seamless experience.

Lenda’s innovative platform allows borrowers to complete their home loans online from start to finish. Through a savvy combination of being a direct lender with proprietary technology, Lenda has simplified the mortgage application process to save customers time and money. With an interactive dashboard, customers know exactly where they are in the mortgage process and can manage all of their documents in one place. The company recently received its Series A funding of $5.25 million, and van den Brand shared that most of the capital investment will be used to improve Lenda’s technology and fulfill the company’s mission, which is to reimagine homeownership, starting with home loans.

AWS allows Lenda to scale its business while providing a secure, easy-to-use system for a faster home loan approval process. Currently, Lenda uses Amazon S3, Amazon EC2, Amazon CloudFront, Amazon Redshift, and Amazon WorkSpaces.

Visit Lenda.com to find out more.

Thanks for reading and see you in October for another round of hot startups!

-Tina

Block The Pirate Bay Within 10 Days, Dutch Court Tells ISPs

Post Syndicated from Andy original https://torrentfreak.com/block-the-pirate-bay-within-10-days-dutch-court-tells-isps-170922/

Three years ago, in 2014, the Court of The Hague handed down its decision in a long-running case which had previously forced two Dutch ISPs, Ziggo and XS4ALL, to block The Pirate Bay.

Ruling against local anti-piracy outfit BREIN, which brought the case, the Court decided that a blockade would be ineffective and also restrict the ISPs’ entrepreneurial freedoms.

The Pirate Bay was unblocked while BREIN took its case to the Supreme Court, which in turn referred the matter to the EU Court of Justice for clarification. This June, the ECJ ruled that as a platform effectively communicating copyright works to the public, The Pirate Bay can indeed be blocked.

The ruling meant there were no major obstacles preventing the Dutch Supreme Court from ordering a future ISP blockade. Clearly, however, BREIN wanted a blocking decision more quickly. A decision handed down today means the anti-piracy group will achieve that in just a few days’ time.

The Hague Court of Appeal today ruled (Dutch) that the 2014 decision, which lifted the blockade against The Pirate Bay, is now largely obsolete.

“According to the Court of Appeal, the Hague Court did not give sufficient weight to the interests of the beneficiaries represented by BREIN,” BREIN said in a statement.

“The Court also wrongly looked at whether torrent traffic had been reduced by the blockade. It should have also considered whether visits to the website of The Pirate Bay itself decreased with a blockade, which speaks for itself.”

As a result, an IP address and DNS blockade of The Pirate Bay, similar to those already in place in the UK and other EU countries, will soon be put in place. BREIN says that four IP addresses will be affected along with hundreds of domain names through which the torrent platform can be reached.

The ISPs have been given just 10 days to put the blocks in place and if they fail there are fines of 2,000 euros per day, up to a maximum of one million euros.

“It is nice that obviously harmful and illegal sites like The Pirate Bay will be blocked again in the Netherlands,” says BREIN chief Tim Kuik.

“A very bad time for our culture, which was free to access via these sites, is now happily behind us.”

Today’s interim decision by the Court of Appeal will stand until the Supreme Court hands down its decision in the main case between BREIN and Ziggo / XS4ALL.

Looking forward, it seems extremely unlikely that the Supreme Court will hand down a conflicting decision, so we’re probably already looking at the beginning of the end for direct accessibility of The Pirate Bay in the Netherlands.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Things Pirates Do To Hinder Anti-Piracy Investigations

Post Syndicated from Andy original https://torrentfreak.com/the-things-pirates-do-to-hinder-anti-piracy-outfits-170909/

Dedicated Internet pirates dealing in fresh content or operating at any significant scale can be pretty sure that rightsholders and their anti-piracy colleagues are interested in their activities at some level.

With this in mind, most pirates these days are aware of things they can do to enhance their security, with products like VPNs often getting discussed on the consumer side.

This week, in a report detailing the challenges social media poses to intellectual property rights, UK anti-piracy outfit Federation Against Copyright Theft published a list of techniques deployed by pirates that hinder their investigations.

Fake/hidden website registration details

“Website registration details are often fake or hidden, which provides no further links to the person controlling the domain and its illegal activities,” the group reveals.

Protected WHOIS records are nothing new and can sometimes be uncloaked by a determined adversary via court procedures. However, in the early stages of an investigation, open records provide leads that can be extremely useful in building an early picture about who might be involved in the operation of a website.

Having them hidden is a definite plus for pirate site operators, especially when the underlying details are also fake, which is a particularly common practice. And, with companies like Peter Sunde’s Njalla entering the market, hiding registrations is easier than ever.

Overseas servers

“Investigating servers located offshore cause some specific problems for FACT’s law-enforcement partners. In order to complete a full investigation into an offshore server, a law-enforcement agency must liaise with its counterpart in the country where the server is located. The difficulties of obtaining evidence from other countries are well known,” FACT notes.

While FACT no doubt corresponds with entities overseas, the anti-piracy outfit has a history of targeting UK citizens who are reportedly infringing copyright. It regularly involves UK police in its investigations (FACT itself employs former police officers) but jurisdiction is necessarily limited to the UK.

It is possible to get overseas law enforcement entities involved to seize a server, for example, but they have to be convinced of the need to do so by the police, which isn’t easy and is usually reserved for more serious cases. The bottom line is that by placing a server a long way away from a pirate’s home territory, things can be made much more difficult for local investigators.

Torrent websites and DMCA compliance

“Some torrent website operators who maintain a high DMCA compliance rate will often use this to try to appease the law, while continuing to provide infringing links,” FACT says.

This is an interesting one. Under law in both the United States and Europe, service providers are required to remove infringing content from their systems when they are notified of its existence by a rightsholder or its agent. Not doing so can render them liable, if the content is indeed infringing.

What FACT appears to be saying is that sites that comply with the law, by removing infringing content when asked to, become more difficult targets for legal action. It sounds very obvious but the underlying suggestion is that compliance on the surface is used as a protective mechanism. No example sites are mentioned but the strategy has clearly hindered FACT.

Current legislation too vague to remove infringing live sports streams

“Current legislation is insufficient to effectively tackle the issue of websites illegally offering coverage of live sports events. Section 512 (c) of the Digital Millennium Copyright Act (DMCA) states that: upon notification of claimed infringement, the service provider should ‘respond expeditiously’ to remove or disable access to the copyright-infringing material. Most live sports events are under two hours long, so such non-specific timeframes for required action are inadequate,” FACT complains.

Since government reports like these can take a long time to prepare, it appears that FACT and its partners may have already found a solution to this particular problem. Major FACT client the Premier League now has a High Court injunction in place which allows it to block infringing streams on a real-time basis. It doesn’t remove the content at its source, but it still renders it largely inaccessible in the UK.

Nevertheless, FACT calls for takedowns to be actioned more swiftly, noting that “the law needs to reflect this narrow timeframe with a specified required response period for websites offering such live feeds.”

Camming content directly from cinema screen to the cloud

“Recent advancements in technology have made this a viable option to ‘cammers’ to avoid detection. Attempts to curtail and delete illicitly recorded film footage may become increasingly difficult with the emergence of streaming apps that automatically upload recorded video to cloud services,” FACT reports.

Over the years, FACT has been involved in numerous operations to hinder those who record movies with cameras in theaters and then upload them to the Internet. Once the perpetrator has exited the theater, FACT has effectively lost the battle, but the possibility that a live upload can now take place is certainly an interesting proposition.

“While enforcing officers may delete the footage held on the device, the footage has potentially already been stored remotely on a cloud system,” FACT warns.

Equally, this could also prove a problem for those seeking to secure evidence. With a cloud upload, the person doing the recording could safely delete the footage from the local device. That could be an obstacle to proving that an offense had even been committed when a suspect is confronted in situ.

Virtual currencies

“There is great potential in virtual currencies for money launderers and illicit traders. Government and law enforcement have raised concerns on how virtual currencies can be sent anonymously, leaving little or no trail for regulators or law-enforcement agencies,” FACT writes.

For many years, pirates of all kinds have relied on systems like PayPal, Mastercard, and Visa, to shift money around. However, these payment systems are now more difficult to deploy on pirate services and are more easily traced, even when operators manage to squeeze them through the gaps.

The same cannot be said of bitcoin and similar currencies that are gaining in popularity all the time. They are harder to use, of course, but there’s little doubt accessibility issues will be innovated out of the equation at some point. Once that happens, these currencies will be a force to be reckoned with.

The UK government’s Share and Share Alike report, which examines the challenges social media poses to intellectual property rights, can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Internet Archive Blocked in 2,650 Site Anti-Piracy Sweep

Post Syndicated from Andy original https://torrentfreak.com/internet-archive-blocked-in-2650-site-anti-piracy-sweep-170810/

Reports of sites becoming mysteriously inaccessible in India have been a regular occurrence over the past several years. In many cases, sites simply stop functioning, leaving users wondering whether sites are actually down or whether there’s a technical issue.

Due to their increasing prevalence, fingers are often pointed at so-called ‘John Doe’ orders, which are handed down by the court to prevent Internet piracy. Often sweeping in nature (and in some cases pre-emptive rather than preventative), these injunctions have been known to block access to both file-sharing platforms and innocent bystanders.

Earlier this week (and again for no apparent reason), the world renowned Internet Archive was rendered inaccessible to millions of users in India. The platform, which is considered by many to be one of the Internet’s most valued resources, hosts more than 15 petabytes of data, a figure which grows on a daily basis. Yet despite numerous requests for information, none was forthcoming from authorities.

The ‘blocked’ message seen by users accessing Archive.org

Quoted by local news outlet Medianama, Chris Butler, Office Manager at the Internet Archive, said that their attempts to contact the Indian Department of Telecom (DoT) and the Ministry of Electronics and Information Technology (Meity) had proven fruitless.

Noting that the site had previously been blocked in India, Butler said they were no clearer on the reasons why the same kind of action had seemingly been taken this week.

“We have no information about why a block would have been implemented,” he said. “Obviously, we are disappointed and concerned by this situation and are very eager to understand why it’s happening and see full access restored to archive.org.”

Now, however, the mystery has been solved. The BBC says a local government agency provided a copy of a court order obtained by two Bollywood production companies who are attempting to slow down piracy of their films in India.

Issued by a local judge, the sweeping order compels local ISPs to block access to 2,650 mainly file-sharing websites, including The Pirate Bay, RARBG, the revived KickassTorrents, and hundreds of other ‘usual suspects’. However, it also includes the URL for the Internet Archive, hence the problems with accessibility this week.

The injunction, which appears to be another John Doe order as previously suspected, was granted by the High Court of the Judicature at Madras on August 2, 2017. Two film production companies – Prakash Jah Productions and Red Chillies Entertainment – obtained the order to protect their films Lipstick Under My Burkha and Jab Harry Met Sejal.

While India-based visitors to blocked resources are often greeted with a message saying that domains have been blocked at the orders of the Department of Telecommunications, these pages never give a reason why.

This always leads to confusion, with news outlets having to pressure local government agencies to discover the reason behind the blockades. In the interests of transparency, providing a link to a copy of a relevant court order would probably benefit all involved.

A few hours ago, the Internet Archive published a statement questioning the process undertaken before the court order was handed down.

“Is the Court aware of and did it consider the fact that the Internet Archive has a well-established and standard procedure for rights holders to submit take down requests and processes them expeditiously?” the platform said.

“We find several instances of take down requests submitted for one of the plaintiffs, Red Chillies Entertainments, throughout the past year, each of which were processed and responded to promptly.

“After a preliminary review, we find no instance of our having been contacted by anyone at all about these films. Is there a specific claim that someone posted these films to archive.org? If so, we’d be eager to address it directly with the claimant.”

But while the Internet Archive appears to be the highest profile collateral damage following the ISP blocks, it isn’t the only victim. Now that the court orders have become available (1,2), it’s clear that other non-pirate entities have also been affected including news site WN.com, website hosting service Weebly, and French ISP Free.fr.

Also, in a sign that sites aren’t being checked to see if they host the movies in question, one of the orders demands that former torrent index BitSnoop is blocked. The site shut down earlier this year. The same is true for Shaanig.org.

This is not the first time that the Internet Archive has been blocked in India. In 2014/2015, Archive.org was rendered inaccessible after it was accused of hosting extremist material. In common with Google, the site copies and stores huge amounts of data, much of it in automated processes. This can leave it exposed to these kinds of accusations.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Encrypted Media Extensions a W3C Recommendation

Post Syndicated from ris original https://lwn.net/Articles/727499/rss

Encrypted Media Extensions (EME) have been under review by the W3C Advisory Committee since last March. This report from the committee addresses comments and objections to EME.

“After consideration of the issues, the Director reached a decision that the EME specification should move to W3C Recommendation. The Encrypted Media Extensions specification remains a better alternative for users than other platforms, including for reasons of security, privacy, and accessibility, by taking advantage of the Web platform. While additional work in some areas may be beneficial for the future of the Web Platform, it remains appropriate for the W3C to make the EME specification a W3C Recommendation. Formal publication of the W3C Recommendation will happen at a later date. We encourage W3C Members and the community to work in both technical and policy areas to find better solutions in this space.”

The Free Software Foundation’s Defective by Design campaign opposes EME, arguing that it infringes on Web users’ control of their own computers, and weakens their security and privacy. “Opponents’ last opportunity to stop EME is an appeal by the Advisory Committee of the World Wide Web Consortium (W3C), the body which Tim Berners-Lee heads. Requiring 5% of the Committee’s 475 members (corporate, nonprofit, and educational institutions) to sign on within a two-week period, the appeal would then trigger a vote from the whole Committee to make a final decision to ratify or reject EME.”

Yahoo Mail’s New Tech Stack, Built for Performance and Reliability

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/162320493306

By Suhas Sadanandan, Director of Engineering 

When it comes to performance and reliability, there is perhaps no application where this matters more than with email. Today, we announced a new Yahoo Mail experience for desktop based on a completely rewritten tech stack that embodies these fundamental considerations and more.

We built the new Yahoo Mail experience using a best-in-class front-end tech stack with open source technologies including React, Redux, Node.js, react-intl (open-sourced by Yahoo), and others. A high-level architectural diagram of our stack is below.


New Yahoo Mail Tech Stack

In building our new tech stack, we made use of the most modern tools available in the industry to come up with the best experience for our users by optimizing the following fundamentals:

Performance

A key feature of the new Yahoo Mail architecture is blazing-fast initial loading (aka, launch).

We introduced new network routing which sends users to their nearest geo-located email servers (proximity-based routing). This has resulted in a significant reduction in time to first byte and should be immediately noticeable to our international users in particular.

We now do server-side rendering to allow our users to see their mail sooner. This change will be immediately noticeable to our low-bandwidth users. Our application is isomorphic, meaning that the same code runs on the server (using Node.js) and the client. Prior versions of Yahoo Mail had programming logic duplicated on the server and the client because we used PHP on the server and JavaScript on the client.   

Using efficient bundling strategies (JavaScript code is separated into application, vendor, and lazy loaded bundles) and pushing only the changed bundles during production pushes, we keep the cache hit ratio high. By using react-atomic-css, our homegrown solution for writing modular and scoped CSS in React, we get much better CSS reuse.  

In prior versions of Yahoo Mail, the need to run various experiments in parallel resulted in additional branching and bloating of our JavaScript and CSS code. While rewriting all of our code, we solved this issue using Mendel, our homegrown solution for bucket testing isomorphic web apps, which we have open sourced.  

Rather than using custom libraries, we use native HTML5 APIs and ES6 heavily and use PolyesterJS, our homegrown polyfill solution, to fill the gaps. These factors have further helped us to keep payload size minimal.

With all the above optimizations, we have been able to reduce our JavaScript and CSS footprint by approximately 50% compared to the previous desktop version of Yahoo Mail, helping us achieve a blazing-fast launch.

In addition to initial launch improvements, key features like search and message read (when a user opens an email to read it) have also benefited from the above optimizations and are considerably faster in the latest version of Yahoo Mail.

We also significantly reduced the memory consumed by Yahoo Mail on the browser. This is especially noticeable during a long running session.

Reliability

With this new version of Yahoo Mail, we have a 99.99% success rate on core flows: launch, message read, compose, search, and actions that affect messages. Accomplishing this over several billion user actions a day is a significant feat. Client-side errors (JavaScript exceptions) are reduced significantly when compared to prior Yahoo Mail versions.

Product agility and launch velocity

We focused on independently deployable components. As part of the re-architecture of Yahoo Mail, we invested in a robust continuous integration and delivery flow. Our new pipeline allows for daily (or more) pushes to all Mail users, and we push only the bundles that are modified, which keeps the cache hit ratio high.

Developer effectiveness and satisfaction

In developing our tech stack for the new Yahoo Mail experience, we heavily leveraged open source technologies, which allowed us to ensure a shorter learning curve for new engineers. We were able to implement a consistent and intuitive onboarding program for 30+ developers and are now using our program for all new hires. During the development process, we emphasise predictable flows and easy debugging.

Accessibility

The accessibility of this new version of Yahoo Mail is state of the art and delivers outstanding usability (efficiency) in addition to accessibility. It features six enhanced visual themes that can provide accommodation for people with low vision and has been optimized for use with Assistive Technology including alternate input devices, magnifiers, and popular screen readers such as NVDA and VoiceOver. These features have been rigorously evaluated and incorporate feedback from users with disabilities. It sets a new standard for the accessibility of web-based mail and is our most-accessible Mail experience yet.

Open source 

We have open sourced some key components of our new Mail stack, like Mendel, our solution for bucket testing isomorphic web applications. We invite the community to use and build upon our code. Going forward, we plan on also open sourcing additional components like react-atomic-css, our solution for writing modular and scoped CSS in React, and lazy-component, our solution for on-demand loading of resources.

Many of our company’s best technical minds came together to write a brand new tech stack and enable a delightful new Yahoo Mail experience for our users.

We encourage our users and engineering peers in the industry to test the limits of our application, and to provide feedback by clicking on the Give Feedback call out in the lower left corner of the new version of Yahoo Mail.

Scratch 2.0: all-new features for your Raspberry Pi

Post Syndicated from Rik Cross original https://www.raspberrypi.org/blog/scratch-2-raspberry-pi/

We’re very excited to announce that Scratch 2.0 is now available as an offline app for the Raspberry Pi! This new version of Scratch allows you to control the Pi’s GPIO (General Purpose Input and Output) pins, and offers a host of other exciting new features.

Offline accessibility

The most recent update to Raspbian includes the app, which makes Scratch 2.0 available offline on the Raspberry Pi. This is great news for clubs and classrooms, where children can now use Raspberry Pis instead of connected laptops or desktops to explore block-based programming and physical computing.

Controlling GPIO with Scratch 2.0

As with Scratch 1.4, Scratch 2.0 on the Raspberry Pi allows you to create code to control and respond to components connected to the Pi’s GPIO pins. This means that your Scratch projects can light LEDs, sound buzzers and use input from buttons and a range of sensors to control the behaviour of sprites. Interacting with GPIO pins in Scratch 2.0 is easier than ever before, as text-based broadcast instructions have been replaced with custom blocks for setting pin output and getting current pin state.

Scratch 2.0 GPIO blocks

To add GPIO functionality, first click ‘More Blocks’ and then ‘Add an Extension’. You should then select the ‘Pi GPIO’ extension option and click OK.

Scratch 2.0 GPIO extension

In the ‘More Blocks’ section you should now see the additional blocks for controlling and responding to your Pi GPIO pins. To give an example, the entire code for repeatedly flashing an LED connected to GPIO pin 2.0 is now:

Flashing an LED with Scratch 2.0

To react to a button connected to a GPIO pin, simply set the pin as input and use the ‘gpio (x) is high?’ block to check the button’s state. In the example below, the Scratch cat will say “Pressed” only when the button is being held down.

Responding to a button press on Scratch 2.0
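Purely for comparison with text-based languages (this is not part of Scratch), the same two behaviours could be expressed in Python with the gpiozero library roughly as follows; the pin numbers here are arbitrary example choices.

# Illustrative Python equivalent of the two Scratch examples above, using the
# gpiozero library. Pin numbers are arbitrary example choices.
from gpiozero import LED, Button
from time import sleep

led = LED(17)       # LED wired to GPIO 17
button = Button(2)  # button wired to GPIO 2

while True:
    led.toggle()    # repeatedly flash the LED
    if button.is_pressed:
        print("Pressed")
    sleep(0.5)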

Cloning sprites

Scratch 2.0 also offers some additional features and improvements over Scratch 1.4. One of the main new features of Scratch 2.0 is the ability to create clones of sprites. Clones are instances of a particular sprite that inherit all of the scripts of the main sprite.

The scripts below show how cloned sprites are used — in this case to allow the Scratch cat to throw a clone of an apple sprite whenever the space key is pressed. Each apple sprite clone then follows its ‘when I start as a clone’ script.

Cloning sprites with Scratch 2.0

The cloning functionality avoids the need to create multiple copies of a sprite, for example multiple enemies in a game or multiple snowflakes in an animation.

Custom blocks

Scratch 2.0 also allows the creation of custom blocks, allowing code to be encapsulated and used (possibly multiple times) in a project. The code below shows a simple custom block called ‘jump’, which is used to make a sprite jump whenever it is clicked.

Custom 'jump' block on Scratch 2.0

These custom blocks can also optionally include parameters, allowing further generalisation and reuse of code blocks. Here’s another example of a custom block that draws a shape. This time, however, the custom block includes parameters for specifying the number of sides of the shape, as well as the length of each side.

Custom shape-drawing block with Scratch 2.0

The custom block can now be used with different numbers provided, allowing lots of different shapes to be drawn.

Drawing shapes with Scratch 2.0
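A parameterised custom block plays the same role as a function with arguments in a text-based language. As a loose analogy (not Scratch itself), the shape-drawing block above corresponds to something like the following Python, using the standard turtle module:

# Loose text-based analogy for a parameterised custom block: a function that
# draws a regular polygon with a given number of sides and side length.
import turtle

def draw_shape(sides, length):
    for _ in range(sides):
        turtle.forward(length)
        turtle.right(360 / sides)

draw_shape(3, 100)   # triangle
draw_shape(6, 60)    # hexagon

turtle.done()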

Peripheral interaction

Another feature of Scratch 2.0 is the addition of code blocks to allow easy interaction with a webcam or a microphone. This opens up a whole new world of possibilities, and for some examples of projects that make use of this new functionality see Clap-O-Meter, which uses the microphone to control a noise level meter, and a Keepie Uppies game that uses video motion to control a football. You can use the Raspberry Pi or USB cameras to detect motion in your Scratch 2.0 projects.

Other new features include a vector image editor and a sound editor, as well as lots of new sprites, costumes and backdrops.

Update your Raspberry Pi for Scratch 2.0

Scratch 2.0 is available in the latest Raspbian release, under the ‘Programming’ menu. We’ve put together a guide for getting started with Scratch 2.0 on the Raspberry Pi online (note that GPIO functionality is only available via the desktop version). You can also try out Scratch 2.0 on the Pi by having a go at a project from the Code Club projects site.

As always, we love to see the projects you create using the Raspberry Pi. Once you’ve upgraded to Scratch 2.0, tell us about your projects via Twitter, Instagram and Facebook, or by leaving us a comment below.

The post Scratch 2.0: all-new features for your Raspberry Pi appeared first on Raspberry Pi.

Surveillance and our Insecure Infrastructure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/surveillance_an_2.html

Since Edward Snowden revealed to the world the extent of the NSA’s global surveillance network, there has been a vigorous debate in the technological community about what its limits should be.

Less discussed is how many of these same surveillance techniques are used by other — smaller and poorer — more totalitarian countries to spy on political opponents, dissidents, human rights defenders; the press in Toronto has documented some of the many abuses, by countries like Ethiopia, the UAE, Iran, Syria, Kazakhstan, Sudan, Ecuador, Malaysia, and China.

That these countries can use network surveillance technologies to violate human rights is a shame on the world, and there’s a lot of blame to go around.

We can point to the governments that are using surveillance against their own citizens.

We can certainly blame the cyberweapons arms manufacturers that are selling those systems, and the countries — mostly European — that allow those arms manufacturers to sell those systems.

There’s a lot more the global Internet community could do to limit the availability of sophisticated Internet and telephony surveillance equipment to totalitarian governments. But I want to focus on another contributing cause to this problem: the fundamental insecurity of our digital systems that makes this a problem in the first place.

IMSI catchers are fake mobile phone towers. They allow someone to impersonate a cell network and collect information about phones in the vicinity of the device, and they’re used to create lists of people who were at a particular event or near a particular location.

Fundamentally, the technology works because the phone in your pocket automatically trusts any cell tower to which it connects. There’s no security in the connection protocols between the phones and the towers.

IP intercept systems are used to eavesdrop on what people do on the Internet. Unlike the surveillance that happens at the sites you visit, by companies like Facebook and Google, this surveillance happens at the point where your computer connects to the Internet. Here, someone can eavesdrop on everything you do.

This system also exploits existing vulnerabilities in the underlying Internet communications protocols. Most of the traffic between your computer and the Internet is unencrypted, and what is encrypted is often vulnerable to man-in-the-middle attacks because of insecurities in both the Internet protocols and the encryption protocols that protect it.

There are many other examples. What they all have in common is that they are vulnerabilities in our underlying digital communications systems that allow someone — whether it’s a country’s secret police, a rival national intelligence organization, or a criminal group — to break or bypass what security there is and spy on the users of these systems.

These insecurities exist for two reasons. First, they were designed in an era where computer hardware was expensive and inaccessibility was a reasonable proxy for security. When the mobile phone network was designed, faking a cell tower was an incredibly difficult technical exercise, and it was reasonable to assume that only legitimate cell providers would go to the effort of creating such towers.

At the same time, computers were less powerful and software was much slower, so adding security into the system seemed like a waste of resources. Fast forward to today: computers are cheap and software is fast, and what was impossible only a few decades ago is now easy.

The second reason is that governments use these surveillance capabilities for their own purposes. The FBI has used IMSI-catchers for years to investigate crimes. The NSA uses IP interception systems to collect foreign intelligence. Both of these agencies, as well as their counterparts in other countries, have put pressure on the standards bodies that create these systems to not implement strong security.

Of course, technology isn’t static. With time, things become cheaper and easier. What was once a secret NSA interception program or a secret FBI investigative tool becomes usable by less-capable governments and cybercriminals.

Man-in-the-middle attacks against Internet connections are a common criminal tool to steal credentials from users and hack their accounts.

IMSI-catchers are used by criminals, too. Right now, you can go onto Alibaba.com and buy your own IMSI catcher for under $2,000.

Despite their uses by democratic governments for legitimate purposes, our security would be much better served by fixing these vulnerabilities in our infrastructures.

These systems are not only used by dissidents in totalitarian countries, they’re also used by legislators, corporate executives, critical infrastructure providers, and many others in the US and elsewhere.

That we allow people to remain insecure and vulnerable is both wrongheaded and dangerous.

Earlier this month, two American legislators — Senator Ron Wyden and Rep Ted Lieu — sent a letter to the chairman of the Federal Communications Commission, demanding that he do something about the country’s insecure telecommunications infrastructure.

They pointed out that not only are insecurities rampant in the underlying protocols and systems of the telecommunications infrastructure, but also that the FCC knows about these vulnerabilities and isn’t doing anything to force the telcos to fix them.

Wyden and Lieu make the point that fixing these vulnerabilities is a matter of US national security, but it’s also a matter of international human rights. All modern communications technologies are global, and anything the US does to improve its own security will also improve security worldwide.

Yes, it means that the FBI and the NSA will have a harder job spying, but it also means that the world will be a safer and more secure place.

This essay previously appeared on AlJazeera.com.

Cisco Draws Attention To The Rise of Pirate IPTV

Post Syndicated from Andy original https://torrentfreak.com/cisco-draws-attention-to-the-rise-of-pirate-iptv-170413/

It’s often said that the peer-to-peer file-sharing boom of the early 2000s was fueled by the entertainment industries’ failure to offer their content in the digital domain. It took years for them to respond, but content is now more widely available online than at any time in history.

Of course, in the background there are still millions of people who prefer to get their content fix for free. The lure of BitTorrent and unauthorized online streaming platforms remains strong but there is a noticeable trend of people wanting their content delivered to a TV instead of a computer screen.

With that in mind, it’s no wonder that the rise of Kodi has been so dramatic. The legal media player augmented with a wide range of unauthorized third-party plugins is a pirate’s dream, providing huge volumes of premium content at zero cost. The system does have its drawbacks, however.

Depending on the content, these setups can at times be unreliable, particularly when it comes to offering live sports. Streams are often low quality, if they stay online at all, which is a frustration to anyone trying to enjoy a real-time event.

One solution to this problem is to buy a $100 package from an official TV provider. Another is to pay a few dollars, euros, or pounds a month to an illegal IPTV supplier, who will provide thousands of often HD quality channels and VOD, with much greater reliability and accessibility than anything available for free.

Judging by the output of various anti-piracy companies, it’s clear that IPTV is a growing concern. Companies like Premier League partners Irdeto are clearly involved in tackling the issue, and now networking company Cisco has shown its hand.

Cisco says it has spent years monitoring all kinds of piracy, but its research into illicit IPTV suggests momentum in the area. Describing the Internet as a “great equalizer”, it says that some services could be “run by a teenager on a home PC” while others have companies and racks of servers behind them. Despite the apparent gray area, the big operations do stand out.

Cisco homes in on one particular supplier called RapidIPTV. According to its own data, the company has 170,000 subscribers. Although market prices vary depending on package size and whether a subscription is bought through a reseller, each could be worth around $10 to $15 per month. Clearly there are significant sums of money involved, even for new entrants to the market.

“Servers which have only been recently launched can be expected to have hundreds of subscribers. Well-established servers might have 5,000-10,000 subscribers,” Cisco says.

The company says it spends time scouring online forums, the main marketplaces where IPTV packages are made available. One of the most active forums has 285 IPTV servers on offer.

“Conservatively, it’s fair to assess that for these 285 servers, there’s an average of 1,000 viewers per server in this highly competitive environment,” Cisco says.

“When we extrapolate from the forum described above, we can conclude that this single forum can easily reach 285,000 viewers. An average forum has offerings of hundreds of servers and each one has thousands of customers.

“So, with hundreds of popular forums worldwide and thousands of customers per site, the global scale of the problem is clearly visible to those of us visiting the forums and maintaining statistics.”
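As a back-of-the-envelope check, the figures quoted above are easy to reproduce. The sketch below simply restates Cisco’s numbers and the article’s $10-to-$15 price range; none of the inputs are independent data.

```python
# Rough reproduction of the estimates quoted above; every input comes from
# Cisco's figures or the article's assumed price range, not new data.
servers = 285                  # IPTV servers offered on the single forum
viewers_per_server = 1_000     # Cisco's conservative per-server estimate
print(servers * viewers_per_server)   # 285,000 viewers from one forum alone

subscribers = 170_000          # RapidIPTV's reported subscriber count
low_price, high_price = 10, 15        # USD per subscriber per month
print(subscribers * low_price, subscribers * high_price)
# roughly $1.7M to $2.55M per month for a single large supplier
```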

While Cisco doesn’t name the forum in question, the site has been operating openly for many years, without obvious signs of legal trouble. The same cannot be said about people who actually provide the services, however.

As highlighted by various raids in the UK, Spain, Bulgaria and elsewhere, the authorities are certainly very aware of the problem. These raids won’t be the last either, something that may put pressure on supply.

“I expect our prices to go up during the next few months,” one supplier told TF on condition of anonymity. “We all know that IPTV is in the crosshairs so suppliers who can’t stand the heat will get out. That might cause capacity problems but you know, it’s a moth to a flame. Others will setup, there’s money on offer. We’ll see.”

The rise of IPTV is certainly interesting from a piracy perspective, since it appears to offer a stepping stone for those who are disappointed by free web streaming but can’t or won’t pay the relatively large sums demanded by broadcasters.

Also of interest is whether companies such as the newly aggressive Netflix will begin to see IPTV services that come bundled with VOD packages as serious competitors. After all, for around the same price as Netflix, it’s possible to get a premium IPTV service with thousands of movies on demand.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Tough Pi-ano

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/24-hour-engineers-tough-pi-ano/

The Tough Pi-ano needs to live up to its name as a rugged, resilient instrument for a very good reason: kids.

Tough Pi-ano

Brian ’24 Hour Engineer’ McEvoy made the Tough Pi-ano as a gift to his aunt and uncle, for use in their centre for children with learning and developmental disabilities such as autism and Down’s syndrome. This easily accessible device uses heavy-duty arcade buttons and has a smooth, solid wood body with no sharp corners.

24 Hour Engineer Presents the Tough Pi-ano

24 Hour Engineer is a channel to showcase the things I’ve built. Instructions for the Tough Pi-ano can be found by visiting my website, 24HourEngineer.com, and searching for “Tough Pi-ano.” http://www.24hourengineer.com/search?q=%22Tough+PiAno%22&max-results=20&by-date=true

The Pi-ano has four octaves of buttons, each octave controlled by a Raspberry Pi Zero. Each Zero is connected to a homebrew resistor board; this board, in turn, is connected to the switches inside the arcade buttons.
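There’s no schematic in this post, but as a rough illustration of how simple the software side of such an instrument can be, here’s a minimal sketch of one octave’s worth of buttons driving audio samples on a Pi Zero. The pin numbers, sample filenames, and the choice of the gpiozero and pygame libraries are assumptions for the sake of the example, not details of McEvoy’s actual build, which reads its switches through the homebrew resistor boards.

```python
#!/usr/bin/env python3
# Minimal one-octave sketch: arcade-button switches wired straight to GPIO
# pins trigger note samples. Pin numbers, filenames, and library choices are
# illustrative assumptions, not the Tough Pi-ano's real wiring or code.
from signal import pause

import pygame
from gpiozero import Button

pygame.mixer.init()

# One arcade button per note; BCM pin numbers are placeholders.
NOTE_PINS = {
    "C4": 2, "D4": 3, "E4": 4, "F4": 17,
    "G4": 27, "A4": 22, "B4": 10, "C5": 9,
}

# Pre-load one WAV sample per note (e.g. samples/C4.wav).
sounds = {note: pygame.mixer.Sound(f"samples/{note}.wav") for note in NOTE_PINS}

def make_handler(note):
    # Each press simply plays the matching sample: no modes, no sliders,
    # in keeping with the instrument's deliberately uncomplicated design.
    return lambda: sounds[note].play()

buttons = []
for note, pin in NOTE_PINS.items():
    button = Button(pin, bounce_time=0.05)  # debounce the chunky arcade switch
    button.when_pressed = make_handler(note)
    buttons.append(button)                  # keep references alive

pause()  # wait for button presses until interrupted
```

One such script per Zero, each watching its own octave of switches, would mirror the four-Zero layout described above.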

Tough Pi-ano

The Tough Pi-ano is designed specifically for musical therapy, so it has a clean and uncomplicated design. It has none of the switches and sliders you’d usually expect to find on an electronic keyboard.

Tough Pi-ano

The simple body, with its resilient keys, allows the Tough Pi-ano to stand up to lots of vigorous playing and forceful treatment, providing an excellent resource for the centre.

The post Tough Pi-ano appeared first on Raspberry Pi.

Inclusive learning at South London Raspberry Jam

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/inclusive-learning-south-london-raspberry-jam/

Raspberry Pi Certified Educator Grace Owolade-Coombes runs the fantastically inclusive South London Raspberry Jam with her son Femi. In this guest post, she gives us the low-down on how the Jam got started. Enjoy!

Grace and Femi

Grace and Femi Owolade-Coombes

Our Jam has been running for over a year now; we’ve had three really big events and one smaller family hack day. Let me begin by telling you about how the idea of running a Jam arose in the first place.

Around three years ago, I read about how coding was going to be part of the curriculum in primary and secondary schools and, as a teacher in the FE sector, I was intrigued. As I also had a young and inquisitive son, who was at primary school at the time, I felt that we should investigate further.

National STEM Centre

Grace visited the National STEM Learning Centre in York for a course which introduced her to coding.

I later attended a short course at the National STEM Learning Centre in York, during which one of the organisers told me about the Raspberry Pi Foundation; he suggested I come to a coding event back at the Centre a few weeks later with my family. We did, and Femi loved the Minecraft hack.

Note from Alex: not the actual Minecraft hack but I’ll be having words with our resource gurus because this would be brilliant!

The first Raspberry Jam we attended was in Southend with Andy Melder and the crew: it showed us just how welcoming the Jam community can be. Then I was lucky enough to attend Picademy, which truly was a transformative experience. Ben Nuttall showed me how to tweet photographs with the Pi, which was the beginning of me using Twitter. I particularly loved Clive Beale’s physical computing workshop which I took back and delivered to Femi.

Grace Owolade-Coombes with Carrie Anne Philbin

Picademy gave Grace the confidence to deliver Raspberry Pi training herself.

After Picademy, I tweeted that I was now a Raspberry Pi Certified Educator and immediately got a request from Dragon Hall, Covent Garden to run a workshop – I didn’t realise they meant in three days’ time! Femi and I bit the bullet and ran our first physical computing workshop together. We haven’t looked back since.

Festival of Code Femi

Femi went on to join the Festival of Code, which he loved.

Around this time, Femi was attending a Tourettes Action support group, where young people with Tourette’s syndrome, like him, met up. Femi wanted to share his love of coding with them, but he felt that they might be put off as it can be difficult to spend extended amounts of time in public places when you have tics. He asked if we could set up a Jam that was inclusive: it would be both autism- and Tourette’s syndrome-friendly. There was such a wealth of support, advice, and volunteers who would help us set up that it really wasn’t a hard decision to make.

Femi Owolade-Coombes

Grace and Femi set up an Indiegogo campaign to help fund their Jam.

We were fortunate to have met Marc Grossman during the Festival of Code: with his amazing skills and experience with Code Club, we set up the Jam together. For our first Jam, we had young coding pioneers from the community, such as Yasmin Bey and Isreal Genius, join us. We were also blessed with David Whale‘s company, and Kano even did a workshop with us. There are too many amazing people to mention.

South London Jam

Grace and Femi held the first South London Raspberry Jam, an autism- and Tourette’s syndrome-friendly event for five- to 15-year-olds, at Deptford Library in October 2015, with 75 participants.

We held a six-session Code Club in Catford Library followed by a second Jam in a local community centre, focusing on robotics with the CamJam EduKit 3, as well as the usual Minecraft hacks.

Our third Jam was in conjunction with Kano, at their HQ, and included a SEN TeachMeet with Computing at School (CAS). Joseph Birks, the inventor of the Crumble, delivered a great robot workshop, and Paul Haynes delivered a Unity workshop too.

Family Hack Day

Grace and Femi’s latest event was a family hack day in conjunction with the London Connected Learning Centre.

Femi often runs workshops at our Jams. We try to encourage young coders to follow in Femi’s footsteps and deliver sessions too: it works best when young people learn from each other, and we hope the confidence they develop will enable them to help their friends and classmates to enjoy coding too.

Inclusivity, diversity, and accessibility are at the heart of our Jams, and we are proud to have Tourettes Action and Ambitious about Autism as partners.

Tourettes Action on Twitter

All welcome to this event in London SAT, 12 DEC 2015 AT 13:00 2nd South London Raspberry Jam 2015 Bellingham… https://t.co/TPYC9Ontot

Now we are taking stock of our amazing journey to learn about coding, and preparing to introduce it to more people. Presently we are looking to collaborate with the South London Makerspace and the Digital Maker Collective, who have invited Femi to deliver robot workshops at Tate Modern. We are also looking to progress to more project-based activities which fit with the Raspberry Pi Foundation’s Pioneers challenges.

Femi Astro Pi

South London Raspberry Jam has participated in both Pi Wars and Astro Pi.

Femi writes about all the events we attend or run: see hackerfemo.com or check out our website and sign up to our mailing list to keep informed. We are just about to gather a team for the Pioneers project, so watch out for updates.

The post Inclusive learning at South London Raspberry Jam appeared first on Raspberry Pi.

Leading Kodi Addon Repository TV Addons Reveals Plans for 2017

Post Syndicated from Andy original https://torrentfreak.com/leading-kodi-addon-repository-tv-addons-reveals-plans-for-2017-161227/

By now, most readers will understand how the Kodi experience operates. The software itself is an entirely legal media player but one which can be augmented with various addons.

For millions of users, it is these addons that really give Kodi its spice. Created by third-party developers, these add-ons allow Kodi to serve a massive range of often pirated content, including movies, TV shows (both live and on demand), plus sports and PPVs.

TV Addons is one of the world’s largest repositories of Kodi addons. It is not affiliated with the creators of Kodi but has become host to some of the software’s most-used tools. As such, its popularity has soared in recent months.

With Kodi add-ons now a major talking point around the world, TF caught up with Eleazar from TV Addons to get his thoughts for 2017.

“We never intended our community to be considered to be a source for sketchy content. There are many popular addons that have been developed as a more convenient way to access legitimately licensed content,” he explains.

“Some of the best and most popular addons include USTVnow, FilmOn, BBC iPlayer and EarthCam. We make open source addons to make the viewing of online content feel more natural when you’re watching on your living room TV.”

While there doesn’t appear to be any intention of reducing the availability of the most popular and controversial addons, Eleazar says a greater emphasis will be placed on tools that don’t fall into potential legal gray areas. Accessibility will also be improved.

“In the New Year, we hope to boost the more legitimate type of addons, from both the development perspective and end user point of view. We’re also in the process of streamlining the entire Kodi addon experience, making it easier for everyone, with our new website design coming very, very soon,” he adds.

In addition to all the excitement over Kodi and its addons in 2016, there has been a considerable amount of bad news for people who have tried to monetize the experience. Hundreds of sellers of pre-configured Android devices have popped up on eBay, Amazon, and other marketplaces, promising free media for all. Some have been arrested, particularly in the UK, where there have been quite a few raids. Eleazar hopes there will be less of this in 2017.

“We really hope that people stop selling cheap pre-programmed Android TV devices. The main reason for our concern is the recent media attention these people have been getting. They are bundling Kodi addons with paid IPTV (something we are strongly against) and then getting busted for it and bringing negative attention to everyone,” he says.

“Secondly, in most cases, the kind of people who sell these devices aren’t the type of people who care about their reputations and thus end up making ridiculous promises to their customers, promises they will never be able to keep. People are maintaining addons for free, and profiteers are ruining it for everyone.”

Eleazar also has harsh words for much of the hardware these suppliers sell to the public. Often of poor quality and sourced from the Far East, these devices can fail to live up to their billing. Better options are available for people to buy themselves, he says.

“If anyone is wondering what the best Kodi devices are, I’ll make it very clear. There’s no point in buying some cheap Chinese-manufactured no name device when you can buy much better devices manufactured by big electronics brands,” he advises.

“If you have no budget, the NVIDIA Shield TV is definitely the top choice. The second best choice would be the Xiaomi Mi Box, which sells for a mere $69 at Wal-Mart, and the third choice would be the Amazon Fire TV. Kodi can be installed to the first two in a few clicks through the Google Play Store, while it would need to be sideloaded to the Amazon Fire TV due to the lack of Android app store support.”

2017 is likely to be another big year for Kodi, its addons, and the streaming services that underpin them. As previously reported, the relationship between all three is somewhat uneasy, with the makers of Kodi and the operators of streaming sites both annoyed (1,2) at the creators of many addons.

Nevertheless, it seems the platform is here to stay – at least until the next big thing comes along.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Weekly roundup: National Novelty Writing Month

Post Syndicated from Eevee original https://eev.ee/dev/2016/11/07/weekly-roundup-national-novelty-writing-month/

Inktober is a distant memory.

Now it’s time for NaNoWriMo! Almost. I don’t have any immediate interest in writing a novel, but I do have plenty of other stuff that needs writing — blog posts, my book, Runed Awakening, etc. So I’m going to try to write 100,000 words this month, spread across whatever.

Rules:

  1. I’m only measuring, like, works. I’ll count this page, as short as it is, because it’s still a single self-contained thing that took some writing effort. But no tweets or IRC or the like.

  2. I’m counting with vim’s g C-g or wc -w, whichever is more convenient. The former is easier for single files I edit in vim; the latter is easier for multiple files or stuff I edit outside of vim.

  3. I’m making absolutely zero effort to distinguish between English text, code, comments, etc.; whatever the word count is, that’s what it is. So code snippets in the book will count, as will markup in blog posts. Runed Awakening is a weird case, but I’m choosing to count it because it’s inherently a text-based game, plus it’s written in a prosaic language. On the other hand, dialogue for Isaac HD does not count, because it’s a few bits of text in what is otherwise just a Lua codebase.

  4. Only daily net change counts. This rule punishes me for editing, but that’s the entire point of NaNoWriMo’s focus on word count: to get something written rather than linger on a section forever and edit it to death. I tend to do far too much of the latter.

    This rule already bit me on day one, where I made some significant progress on Runed Awakening but ended up with a net word count of -762 because it involved some serious refactoring. Oops. Turns out word-counting code is an even worse measure of productivity than line-counting code. (A rough sketch of automating this daily tally appears just after this list.)
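For what it’s worth, the whole scheme is easy to automate. Here’s a minimal sketch, assuming the works being counted live in a handful of known files and that a small JSON log is acceptable; the file list and log path are made up for the example, and it approximates `wc -w` rather than vim’s `g C-g`.

```python
#!/usr/bin/env python3
# Minimal daily word-count logger: records today's total and prints the net
# change since yesterday. File list and log path are illustrative only.
import datetime
import json
from pathlib import Path

FILES = ["book/pico8-chapter.md", "blog/js-teardown.md"]  # hypothetical paths
LOG = Path("wordcount.json")

def count_words(paths):
    # Roughly what `wc -w` does: count whitespace-separated tokens, making no
    # distinction between prose, code, comments, or markup (rule 3).
    return sum(len(Path(p).read_text(encoding="utf-8").split()) for p in paths)

def main():
    log = json.loads(LOG.read_text()) if LOG.exists() else {}
    today = datetime.date.today()
    log[today.isoformat()] = count_words(FILES)
    LOG.write_text(json.dumps(log, indent=2, sort_keys=True))

    yesterday = (today - datetime.timedelta(days=1)).isoformat()
    if yesterday in log:
        print(f"net change today: {log[today.isoformat()] - log[yesterday]:+d} words")
    else:
        print(f"total today: {log[today.isoformat()]} words")

if __name__ == "__main__":
    main()
```

Run once a day (by hand or from cron), this gives the same daily net-change figure as rule 4, negative days included.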

These rules are specifically crafted to nudge me into working a lot more on my book and Runed Awakening, those two things I’d hoped to get a lot further on in the last three months. And unlike Inktober, blog posts contribute towards my preposterous goal rather than being at odds with it.

With one week down, so far I’m at +8077 words. I got off to a pretty slow (negative, in fact) start, and then spent a day out of action from an ear infection, so I’m a bit behind. Hoping I can still catch up as I get used to this whole “don’t rewrite the same paragraph over and over for hours” approach.

  • art: Last couple ink drawings of Pokémon, hallelujah. I made a montage of them all, too.

    I drew Momo (the cat from Google’s Halloween doodle game) alongside Isaac and it came out spectacularly well.

    I finally posted the loophole commission.

    I posted a little “what type am I” meme on Twitter and drew some of the interesting responses. I intended to draw a couple more, but then I got knocked on my ass and my brain stopped working. I still might get back to them later.

  • blog: I posted an extremely thorough teardown of JavaScript. That might be cheating, but it’s okay, because I love cheating.

    Wrote a whole lot about Java.

  • doom: I did another speedmap. I haven’t released the last two yet; I want to do a couple more and release them as a set.

  • blog: I wrote about game accessibility, which touched on those speedmaps.

  • runed awakening: I realized I didn’t need all the complexity of (and fallout caused by) the dialogue extension I was using, so I ditched it in favor of something much simpler. I cleaned up some stuff, fixed some stuff, improved some stuff, and started on some stuff. You know.

  • book: I’m working on the PICO-8 chapter, since I’ve actually finished the games it describes. I’m having to speedily reconstruct the story of how I wrote Under Construction, which is interesting. I hope it still comes out like a story and not a tutorial.

As for the three big things, well, they sort of went down the drain. I thought they might; I don’t tend to be very good at sticking with the same thing for a long and contiguous block of time. I’m still making steady progress on all of them, though, and I did some other interesting stuff in the last three months, so I’m satisfied regardless.

With November devoted almost exclusively to writing, I’m really hoping I can finally have a draft chapter of the book ready for Patreon by the end of the month. That $4 tier has kinda been languishing, sorry.

Weekly roundup: Inktober 4: A New Hope

Post Syndicated from Eevee original https://eev.ee/dev/2016/11/01/weekly-roundup-inktober-4-a-new-hope/

Inktober is over! Oh my god.

  • art: Almost the last of the ink drawings of Pokémon, all of them done in fountain pen now. I filled up the sketchbook I’d been using and switched to a 9”×12” one. Much to my surprise, that made the inks take longer.

    I did some final work on that loophole commission from a few weeks ago.

  • irl: I voted, and am quite cross that election news has continued in spite of this fact.

  • doom: I made a few speedmaps — maps based on random themes and made in an hour (or so). It was a fun and enlightening experience, and I’ll definitely do some more of it.

  • blog: I wrote about game accessibility, which touched on those speedmaps.

  • mario maker: One of the level themes I got was “The Wreckage”, and I didn’t know how to speedmap that in Doom in only an hour, but it sounded like an interesting concept for a Mario level.

I managed to catch up on writing by the end of the month (by cheating slightly), so I’m starting fresh in November. The “three big things” obviously went out the window in favor of Inktober, but I’m okay with that. I’ve got something planned for this next month that should make up for it, anyway.
