Tag Archives: Projects

HackSpace magazine #1 is out now!

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-1/

HackSpace magazine is finally here! Grab your copy of the new magazine for makers today, and try your hand at some new, exciting skills.

HackSpace magazine issue 1 cover

What is HackSpace magazine?

HackSpace magazine is the newest publication from the team behind The MagPi. Chock-full of amazing projects, tutorials, features, and maker interviews, HackSpace magazine brings together the makers of the world every month, with you — the community — providing the content.

HackSpace magazine is out now!

The new magazine for the modern maker is out now! Learn more at https://hsmag.cc HackSpace magazine is the new monthly magazine for people who love to make things and those who want to learn. Grab some duct tape, fire up a microcontroller, ready a 3D printer and hack the world around you!

Inside issue 1

Fancy smoking bacon with your very own cold smoker? How about protecting your home with a mini trebuchet for your front lawn? Or maybe you’d like to learn from awesome creator Becky Stern how to get paid for making the things you love? No matter whether it’s handheld consoles, robot prosthetics, Christmas projects, or, er, duct tape — whatever your maker passion, issue 1 is guaranteed to tick your boxes!



HackSpace magazine is packed with content from every corner of the maker world: from welding to digital making, and from woodwork to wearables. And whatever you enjoy making, we want to see it! So as you read through this first issue, imagine your favourite homemade projects on our pages, then make that a reality by emailing us the details via [email protected].

Get your copy

You can grab issue 1 of HackSpace magazine right now from WHSmith, Tesco, Sainsbury’s, and independent newsagents. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium and Brazil — ask your local newsagent whether they’ll be getting HackSpace magazine. Alternatively, you can get the new issue online from our store, or digitally via our Android or iOS apps. And don’t forget, as with all our publications, a free PDF of HackSpace magazine is available from release day.

We’re also offering money-saving subscriptions — find details on the magazine website. And if you’re a subscriber of The MagPi, your free copy of HackSpace magazine is on its way, with details of a super 50% discount on subscriptions! Could this be the Christmas gift you didn’t know you wanted?

Share your makes and thoughts

Make sure to follow HackSpace magazine on Facebook and Twitter, or email the team at [email protected] to tell us about your projects and share your thoughts about issue 1. We’ve loved creating this new magazine for the maker community, and we hope you enjoy it as much as we do.

The post HackSpace magazine #1 is out now! appeared first on Raspberry Pi.

What do you want your button to do?

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/button/

Here at Raspberry Pi, we know that getting physical with computing is often a catalyst for creativity. Building a simple circuit can open up a world of making possibilities! This ethos of tinkering and invention is also being used in the classroom to inspire a whole new generation of makers. Here is why.

The all-important question

Physical computing provides a great opportunity for creative expression: the button press! By explaining how a button works, how to build one with a breadboard attached to a computer, and how to program the button to work when it’s pressed, you can give learners young and old all the conceptual skills they need to build a thing that does something. But what do they want their button to do? Have you ever asked your students or children at home? I promise it will be one of the most mind-blowing experiences you’ll have if you do.

A button. A harmless, little arcade button.

Looks harmless now, but put it into the hands of a child and see what happens!

Amy will want her button to take a photo, Charlie will want his button to play a sound, Tumi will want her button to explode TNT in Minecraft, Jack will want their button to fire confetti out of a cannon, and James Robinson will want his to trigger silly noises (doesn’t he always?)! Idea generation is the inherent gift that every child has in abundance. As educators and parents, we’re always looking to deeply engage our young people in the subject matter we’re teaching, and they are never more engaged than when they have an idea and want to implement it. Way back in 2012, I wanted my button to print geeky sayings:

Geek Gurl Diaries Raspberry Pi Thermal Printer Project Sneak Peek!

A sneak peek at the finished Geek Gurl Diaries ‘Box of Geek’. I’ve been busy making this for a few weeks with some help from friends. Tutorial to make your own box coming soon, so keep checking the Geek Gurl Diaries Twitter, Facebook page and channel.

What are the challenges for this approach in education?

Allowing this kind of free-form creativity and tinkering in the classroom obviously has its challenges for teachers, especially those confined to rigid lesson structures, timings, and small classrooms. The most common worry I hear from teachers is “what if they ask a question I can’t answer?” Encouraging this sort of creative thinking makes that almost an inevitability. How can you facilitate roughly 30 different projects simultaneously? The answer is by using those other computational and transferable thinking skills:

  • Problem-solving
  • Iteration
  • Collaboration
  • Evaluation

Clearly specifying a problem, surveying the tools available to solve it (including online references and external advice), and then applying them to solve the problem is a hugely important skill, and this is a great opportunity to teach it.

A girl plays a button reaction game at a Raspberry Pi event

Press ALL the buttons!

Hands-off guidance

When we train teachers at Picademy, we group attendees around themes that have come out of the idea generation session. Together they collaborate on an achievable shared goal. One will often sketch something on a whiteboard, decomposing the problem into smaller parts; then the group will divide up the tasks. Each will look online or in books for tutorials to help them with their step. I’ve seen this behaviour in student groups too, and it’s very easy to facilitate. You don’t need to be the resident expert on every project that students want to work on.

The key is knowing where to guide students to find the answers they need. Curating online videos, blogs, tutorials, and articles in advance gives you the freedom and confidence to concentrate on what matters: the learning. We have a number of physical computing projects that use buttons, linked to our curriculum for learners to combine inputs and outputs to solve a problem. The WhooPi cushion and GPIO music box are two of my favourites.
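To give a sense of how little code a first button project needs, here is a minimal sketch using the gpiozero Python library. It assumes a button wired between GPIO pin 2 and ground, and the action itself is a placeholder for whatever your learners dream up:

from gpiozero import Button
from signal import pause

# Assumes an arcade button wired between GPIO 2 and GND
button = Button(2)

def do_the_thing():
    # Placeholder action: swap in taking a photo, playing a sound,
    # or exploding TNT in Minecraft
    print("Button pressed!")

button.when_pressed = do_the_thing
pause()  # keep the script running, waiting for presses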

A Raspberry Pi and button attached to a computer display

Outside of formal education, events such as Raspberry Jams, CoderDojos, CAS Hubs, and hackathons are ideal venues for seeking and receiving support and advice.

Cross-curricular participation

The rise of the global maker movement, I think, is in response to abstract concepts and disciplines. Children are taught lots of concepts in isolation that aren’t always relevant to their lives or immediate environment. Digital making provides a unique and exciting way of bridging different subject areas, allowing for cross-curricular participation. I’m not suggesting that educators should throw away all their schemes of work and leave the full direction of the computing curriculum to students. However, there’s huge value in exposing learners to the possibilities for creativity in computing. Creative freedom and expression guide learning, better preparing young people for the workplace of tomorrow.

So…what do you want your button to do?

Hello World

Learn more about today’s subject, and read further articles regarding computer science in education, in Hello World magazine issue 1.

Read Hello World issue 1 for more…

UK-based educators can subscribe to Hello World to receive a hard copy delivered for free to their doorstep, while the PDF is available for free to everyone via the Hello World website.

The post What do you want your button to do? appeared first on Raspberry Pi.

How to Enable Caching for AWS CodeBuild

Post Syndicated from Karthik Thirugnanasambandam original https://aws.amazon.com/blogs/devops/how-to-enable-caching-for-aws-codebuild/

AWS CodeBuild is a fully managed build service. There are no servers to provision and scale, or software to install, configure, and operate. You just specify the location of your source code, choose your build settings, and CodeBuild runs build scripts for compiling, testing, and packaging your code.

A typical application build process includes phases like preparing the environment, updating the configuration, downloading dependencies, running unit tests, and finally, packaging the built artifact.

Downloading dependencies is a critical phase in the build process. These dependent files can range in size from a few KBs to multiple MBs. Because most of the dependent files do not change frequently between builds, you can noticeably reduce your build time by caching dependencies.

In this post, I will show you how to enable caching for AWS CodeBuild.

Requirements

  • Create an Amazon S3 bucket for storing cache archives (you can use an existing S3 bucket as well).
  • Create a GitHub account (if you don’t have one).

Create a sample build project:

1. Open the AWS CodeBuild console at https://console.aws.amazon.com/codebuild/.

2. If a welcome page is displayed, choose Get started.

If a welcome page is not displayed, on the navigation pane, choose Build projects, and then choose Create project.

3. On the Configure your project page, for Project name, type a name for this build project. Build project names must be unique across each AWS account.

4. In Source: What to build, for Source provider, choose GitHub.

5. In Environment: How to build, for Environment image, select Use an image managed by AWS CodeBuild.

  • For Operating system, choose Ubuntu.
  • For Runtime, choose Java.
  • For Version, choose aws/codebuild/java:openjdk-8.
  • For Build specification, select Insert build commands.

Note: The build specification file (buildspec.yml) can be configured in two ways. You can package it along with your source root directory, or you can override it by using a project environment configuration. In this example, I will use the override option and will use the console editor to specify the build specification.

6. Under Build commands, click Switch to editor to enter the build specification.

Copy the following text.

version: 0.2

phases:
  build:
    commands:
      - mvn install
      
cache:
  paths:
    - '/root/.m2/**/*'

Note: The cache section in the build specification instructs AWS CodeBuild about the paths to be cached. Like the artifacts section, the cache paths are relative to $CODEBUILD_SRC_DIR and specify the directories to be cached. In this example, Maven stores the downloaded dependencies to the /root/.m2/ folder, but other tools use different folders. For example, pip uses the /root/.cache/pip folder, and Gradle uses the /root/.gradle/caches folder. You might need to configure the cache paths based on your language platform.

7. In Artifacts: Where to put the artifacts from this build project:

  • For Type, choose No artifacts.

8. In Cache:

  • For Type, choose Amazon S3.
  • For Bucket, choose your S3 bucket.
  • For Path prefix, type cache/archives/

9. In Service role, the Create a service role in your account option will display a default role name. You can accept the default name or type your own.

If you already have an AWS CodeBuild service role, choose Choose an existing service role from your account.

10. Choose Continue.

11. On the Review page, to run a build, choose Save and build.
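If you prefer scripting over the console, the same cache configuration can be applied through the AWS SDKs. Here is a minimal sketch using boto3 (Python); the project name and bucket path are placeholders for your own values:

import boto3

codebuild = boto3.client('codebuild')

# Point the project's cache at the S3 bucket and path prefix configured earlier.
# 'my-build-project' and 'my-cache-bucket' are placeholder names.
codebuild.update_project(
    name='my-build-project',
    cache={
        'type': 'S3',
        'location': 'my-cache-bucket/cache/archives'
    }
)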

Review build and cache behavior:

Let us review our first build for the project.

In the first run, where no cache exists, the overall build time looks something like this (notice the times for DOWNLOAD_SOURCE, BUILD, and POST_BUILD):

If you check the build logs, you will see log entries for dependency downloads. The dependencies are downloaded directly from configured external repositories. At the end of the log, you will see an entry for the cache uploaded to your S3 bucket.

Let’s review the S3 bucket for the cached archive. You’ll see the cache from our first successful build is uploaded to the configured S3 path.

Let’s try another build with the same CodeBuild project. This time the build should pick up the dependencies from the cache.

In the second run, there was a cache hit (cache was generated from the first run):

You’ll notice a few things:

  1. DOWNLOAD_SOURCE took slightly longer, because in addition to the source code, this time the build also downloaded the cache from your S3 bucket.
  2. BUILD time was faster, as the dependencies didn’t need to be downloaded but were reused from the cache.
  3. POST_BUILD took slightly longer, but was roughly the same.

Overall, the build duration improved with caching.

Best practices for cache

  • By default, the cache archive is encrypted on the server side with the customer’s artifact KMS key.
  • You can expire the cache by manually removing the cache archive from S3. Alternatively, you can expire the cache by using an S3 lifecycle policy.
  • You can override cache behavior by updating the project. You can use the AWS CodeBuild console, AWS CLI, or AWS SDKs to update the project. You can also invalidate the cache by using the new InvalidateProjectCache API (a sketch of this call follows this list). This API forces a new InvalidationKey to be generated, ensuring that future builds receive an empty cache. This API does not remove the existing cache, because this could cause inconsistencies with builds currently in flight.
  • The cache can be enabled for any folders in the build environment, but we recommend you only cache dependencies/files that will not change frequently between builds. Also, to avoid unexpected application behavior, don’t cache configuration and sensitive information.
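As mentioned in the list above, the cache can also be invalidated programmatically. A minimal sketch with boto3 (Python); the project name is again a placeholder:

import boto3

codebuild = boto3.client('codebuild')

# Force a new InvalidationKey so that future builds start with an empty cache.
# The existing archive stays in S3 until you delete it or a lifecycle rule expires it.
codebuild.invalidate_project_cache(projectName='my-build-project')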

Conclusion

In this blog post, I showed you how to enable and configure caching for AWS CodeBuild. As you can see, this can save considerable build time. It also improves resiliency by avoiding external network connections to an artifact repository.

I hope you found this post useful. Feel free to leave your feedback or suggestions in the comments.

Access Resources in a VPC from AWS CodeBuild Builds

Post Syndicated from John Pignata original https://aws.amazon.com/blogs/devops/access-resources-in-a-vpc-from-aws-codebuild-builds/

John Pignata, Startup Solutions Architect, Amazon Web Services

In this blog post we’re going to discuss a new AWS CodeBuild feature that is available starting today. CodeBuild builds can now access resources in a VPC directly, without those resources being exposed to the public internet. These resources include Amazon Relational Database Service (Amazon RDS) databases, Amazon ElastiCache clusters, internal services running on Amazon Elastic Compute Cloud (Amazon EC2) and Amazon EC2 Container Service (Amazon ECS), and any service endpoints that are only reachable from within a specific VPC.

CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. As part of the build process, developers often require access to resources that should be isolated from the public Internet. Now CodeBuild builds can be optionally configured to have VPC connectivity and access these resources directly.

Accessing Resources in a VPC

You can configure builds to have access to a VPC when you create a CodeBuild project or you can update an existing CodeBuild project with VPC configuration attributes. Here’s how it looks in the console:


To configure VPC connectivity: select a VPC, one or more subnets within that VPC, and one or more VPC security groups that CodeBuild should apply when attaching to your VPC. Once configured, commands running as part of your build will be able to access resources in your VPC without transiting across the public Internet.
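Programmatically, the same settings map to the project’s vpcConfig attribute. Here is a minimal sketch using boto3 (Python); all of the IDs are placeholders for your own VPC, subnets, and security groups:

import boto3

codebuild = boto3.client('codebuild')

# Attach an existing build project to a VPC (placeholder IDs).
codebuild.update_project(
    name='my-build-project',
    vpcConfig={
        'vpcId': 'vpc-0123456789abcdef0',
        'subnets': ['subnet-0123456789abcdef0'],
        'securityGroupIds': ['sg-0123456789abcdef0']
    }
)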

Use Cases

The availability of VPC connectivity from CodeBuild builds unlocks many potential uses. For example, you can:

  • Run integration tests from your build against data in an Amazon RDS instance that’s isolated on a private subnet.
  • Query data in an ElastiCache cluster directly from tests.
  • Interact with internal web services hosted on Amazon EC2, Amazon ECS, or services that use internal Elastic Load Balancing.
  • Retrieve dependencies from self-hosted, internal artifact repositories such as PyPI for Python, Maven for Java, npm for Node.js, and so on.
  • Access objects in an Amazon S3 bucket configured to allow access only through a VPC endpoint.
  • Query external web services that require fixed IP addresses through the Elastic IP address of the NAT gateway associated with your subnet(s).

… and more! Your builds can now access any resource that’s hosted in your VPC without any compromise on network isolation.

Internet Connectivity

CodeBuild requires access to resources on the public Internet to successfully execute builds. At a minimum, it must be able to reach your source repository system (such as AWS CodeCommit, GitHub, or Bitbucket), Amazon Simple Storage Service (Amazon S3) to deliver build artifacts, and Amazon CloudWatch Logs to stream logs from the build process. The interface attached to your VPC will not be assigned a public IP address, so to enable Internet access from your builds, you will need to set up a managed NAT gateway or NAT instance for the subnets you configure. You must also ensure your security groups allow outbound access to these services.

IP Address Space

Each running build will be assigned an IP address from one of the subnets in your VPC that you designate for CodeBuild to use. As CodeBuild scales to meet your build volume, ensure that you select subnets with enough address space to accommodate your expected number of concurrent builds.

Service Role Permissions

CodeBuild requires new permissions in order to manage network interfaces on your VPCs. If you create a service role for your new projects, these permissions will be included in that role’s policy automatically. For existing service roles, you can edit the policy document to include the additional actions. For the full policy document to apply to your service role, see Advanced Setup in the CodeBuild documentation.

For more information, see VPC Support in the CodeBuild documentation. We hope you find the ability to access internal resources on a VPC useful in your build processes! If you have any questions or feedback, feel free to reach out to us through the AWS CodeBuild forum or leave a comment!

New – Interactive AWS Cost Explorer API

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-interactive-aws-cost-explorer-api/

We launched the AWS Cost Explorer a couple of years ago in order to allow you to track, allocate, and manage your AWS costs. The response to that launch, and to additions that we have made since then, has been very positive. However, our customers are, as Jeff Bezos has said, “beautifully, wonderfully, dissatisfied.”

I see this first-hand every day. We launch something and that launch inspires our customers to ask for even more. For example, with many customers going all-in and moving large parts of their IT infrastructure to the AWS Cloud, we’ve had many requests for the raw data that feeds into the Cost Explorer. These customers want to programmatically explore their AWS costs, update ledgers and accounting systems with per-application and per-department costs, and to build high-level dashboards that summarize spending. Some of these customers have been going to the trouble of extracting the data from the charts and reports provided by Cost Explorer!

New Cost Explorer API
Today we are making the underlying data that feeds into Cost Explorer available programmatically. The new Cost Explorer API gives you a set of functions that allow you do everything that I described above. You can retrieve cost and usage data that is filtered and grouped across multiple dimensions (Service, Linked Account, tag, Availability Zone, and so forth), aggregated by day or by month. This gives you the power to start simple (total monthly costs) and to refine your requests to any desired level of detail (writes to DynamoDB tables that have been tagged as production) while getting responses in seconds.

Here are the operations:

GetCostAndUsage – Retrieve cost and usage metrics for a single account or all accounts (master accounts in an organization have access to all member accounts) with filtering and grouping.

GetDimensionValues – Retrieve available filter values for a specified filter over a specified period of time.

GetTags – Retrieve available tag keys and tag values over a specified period of time.

GetReservationUtilization – Retrieve EC2 Reserved Instance utilization over a specified period of time, with daily or monthly granularity plus filtering and grouping.
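To give a flavour of the API, here is a minimal sketch using boto3 (Python) that retrieves one month of costs grouped by service. The dates are illustrative, and the endpoint lives in US East (Northern Virginia):

import boto3

# The Cost Explorer endpoint is in us-east-1
ce = boto3.client('ce', region_name='us-east-1')

# Monthly unblended cost for October 2017, grouped by service
response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2017-10-01', 'End': '2017-11-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}]
)

for group in response['ResultsByTime'][0]['Groups']:
    print(group['Keys'][0], group['Metrics']['UnblendedCost']['Amount'])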

I believe that these functions, and the data that they return, will let you do some really interesting things and give you better insights into your business. For example, you could tag the resources used to support individual marketing campaigns or development projects and then deep-dive into the costs to measure business value. You now have the potential to know, down to the penny, how much you spend on infrastructure for important events like Cyber Monday or Black Friday.

Things to Know
Here are a couple of things to keep in mind as you start to think about ways to make use of the API:

Grouping – The Cost Explorer web application provides you with one level of grouping; the APIs give you two. For example, you could group costs or RI utilization by Service and then by Region.

Pagination – The functions can return very large amounts of data and follow the AWS-wide model for pagination by including a nextPageToken if additional data is available. You simply call the same function again, supplying the token, to move forward.
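As a sketch of that pagination model (reusing the boto3 client from the example above, with illustrative dates), a loop like this keeps fetching pages until no token is returned:

results = []
token = None

while True:
    kwargs = {'NextPageToken': token} if token else {}
    page = ce.get_cost_and_usage(
        TimePeriod={'Start': '2017-10-01', 'End': '2017-11-01'},
        Granularity='DAILY',
        Metrics=['UnblendedCost'],
        **kwargs
    )
    results.extend(page['ResultsByTime'])
    token = page.get('NextPageToken')
    if token is None:
        break  # no more pages to fetch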

Regions – The service endpoint is in the US East (Northern Virginia) Region and returns usage data for all public AWS Regions.

Pricing – Each API call costs $0.01. To put this into perspective, let’s say you use this API to build a dashboard and it gets 1000 hits per month from your users. Your operating cost for the dashboard should be $10 or so; this is far less expensive than setting up your own systems to extract & ingest the data and respond to interactive queries.

The Cost Explorer API is available now and you can start using it today. To learn more, read about the Cost Explorer API.

Jeff;

Kodi Addon Dev Says “Show of Force” Will Be Met With Defiance

Post Syndicated from Andy original https://torrentfreak.com/kodi-addon-dev-says-show-force-will-met-defiance-171119/

For many years, the members of the MPAA have flexed their muscles all around the globe, working to prevent people from engaging in online piracy. If the last 17 years’ ‘progress’ is anything to go by, it’s a war that will go on indefinitely.

With Columbia, Disney, Paramount, Twentieth Century Fox, Universal, and Warner on board, the MPAA has historically relied on sheer power to intimidate opponents. That has certainly worked in many large piracy cases but for many peripheral smaller-scale pirates, their presence is largely ignored.

This week, however, several players in the Kodi scene discovered that these giants – and more besides – have the ability to literally turn up at their front door. As reported Thursday, UK-based Kodi addon developer The_Alpha received a hand-delivered cease-and-desist letter from all of the above, accompanied by new faces Netflix, Amazon and Sky TV.

These companies are part of the Alliance for Creativity and Entertainment (ACE), a massive and recently-formed anti-piracy coalition comprised of 30 global entertainment brands. TorrentFreak reached out to The_Alpha for his thoughts on coming under such a dazzling spotlight but perhaps understandably he didn’t want to comment.

The leader of the Ares Project was willing to go on the record, however, after he too received a hand-delivered threat during the week. His decision was to immediately comply and shut down, but TF is informed that others might not be so willing to follow suit.

A Kodi addon developer living in the UK who spoke to us on condition of anonymity told us that most people operating in the scene expected some kind of trouble – just not on this scale.

“Did you see the [company logos] across the top of Alpha’s letter? That’s some serious shit right there. The film companies are no surprise but Amazon delivers my groceries so I don’t expect this shit from them,” he said.

When the ACE partnership was formed earlier this year, it seemed pretty clear that the main drive was towards the pooling of anti-piracy resources to be more effective and efficient. However, it can’t have escaped ACE that such a broad and powerful alliance could also have a profound psychological effect on its adversaries.

“There’s no doubt in my mind that they’re turning up mob-handed to put the shits up people like Alpha and the rest of us,” the developer said. “It’s hardly a fair dust-up is it? What have we got to fight back with, a giro [state benefits]? It’s a show of force, ‘look how important we are’!”

Interestingly, however, the dev told us that it isn’t necessarily the size of the coalition that has him most concerned. What caught his eye was the inclusion of two influential UK-based companies in the alliance.

“Having Sly [a local derogatory nickname for Sky TV] and the Premier League on the letter makes it much more serious to me than seeing Warner or whatever,” he commented.

“I don’t get involved in footie but Sly is everywhere round here and I think it’s something the Brit dev scene might take notice of, even if most say ‘fuck it’ and carry on anyway.”

When questioned whether that’s likely, our source said that while ACE might be able to tackle some of the bigger targets like Ares Project or Colossus, they fundamentally misunderstand how the Kodi scene works.

“If you want a good example of a scattered pirate scene, I give you Kodi. They can bomb the base or whatever but nobody lives there,” he explained.

“There’s some older blokes like me who can do without the stress but a lot of younger coders, builders and YouTubers who thrive on it. They’re used to running around council estates with real-life problems. A faffy letter from some toff in a suit means literally nothing. Like I said, all they have to lose is a giro.”

Whether this is just bravado will remain to be seen, but our earlier discussions with others in the scene indicate a particular weakness in the UK, with many players vulnerable to being found after failing to hide their identities in the past. To a point, our source agrees that this is a problem.

“People are saying that Alpha was found after trying to raise some charity money related to his disabled son but I don’t know for sure and nor does anybody else. What strikes me is that none of us really thought things would get this on top here because all you ever hear about is America this, Canada that, whatever. Does this means that more of us are getting done in England? You tell me,” he said.

Only time will tell but stamping out the pirate Kodi scene is going to be hard work.

Within hours of several projects disappearing Wednesday and Thursday, YouTube and myriad blogs were being flooded with guides detailing immediate replacements. This ad-hoc network of enthusiasts makes the exchange of information happen at an alarming rate and it’s hard to see how any company – no matter how powerful – will ever be able to keep up.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Pip: digital creation in your pocket from Curious Chip

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pip-curious-chip/

Get your hands on Pip, the handheld Raspberry Pi–based device for aspiring young coders and hackers from Curious Chip.

A GIF of Pip - Curious Chip - Pip handheld device - Raspberry Pi

Pip is a handheld gaming console from Curious Chip which you can now back on Kickstarter. Using the Raspberry Pi Compute Module 3, Pip allows users to code, hack, and play wherever they are.

We created Pip so that anyone can tinker with technology. From beginners to those who know more — Pip makes it easy, simple, and fun!

For gaming

Pip’s smart design may well remind you of a certain handheld gaming console released earlier this year. With its central screen and detachable side controllers, Pip has a size and shape ideal for gaming.

A GIF of Pip - Curious Chip - Pip handheld device - Raspberry Pi

Those who have used a Raspberry Pi with the Raspbian OS might be familiar with Minecraft Pi, a variant of the popular Minecraft game created specifically for Pi users to play and hack for free. Users of Pip will be able to access Minecraft Pi from the portable device and take their block-shaped creations with them wherever they go.

And if that’s not enough, Pip’s Pi brain allows coders to create their own games using Scratch, in addition to giving access to a growing library of games in Curious Chip’s online arcade.

Digital making

Pip’s GPIO pins are easily accessible, so that you can expand upon your digital making skills with physical computing projects. Grab your Pip and a handful of jumper leads, and you will be able to connect and control components such as lights, buttons, servomotors, and more!

A smiling girl with Pip and a laptop

You can also attach any of the range of HAT add-on boards available on the market, such as our own Sense HAT, or ones created by Pimoroni, Adafruit, and others. And if you’re looking to learn a new coding language, you’re in luck: Pip supports Python, HTML/CSS, JavaScript, Lua, and PHP.

Maker Pack and add-ons

Backers can also pledge their funds for additional hardware, such as the Maker Pack, an integrated camera, or a Pip Breadboard Kit.

PipHAT and Breadboard add-ons - Curious Chip - Pip handheld device - Raspberry Pi

The breadboard and the optional PipHAT are also compatible with any Raspberry Pi 2 and 3. Nice!

Curiosity from Curious Chip

Users of Pip can program their device via Curiosity, a tool designed specifically for this handheld device.

Pip’s programming tool is called Curiosity, and it’s hosted on Pip itself and accessed via WiFi from any modern web browser, so there’s no software to download and install. Curiosity allows Pip to be programmed using a number of popular programming languages, including JavaScript, Python, Lua, PHP, and HTML5. Scratch-inspired drag-and-drop block programming is also supported with our own Google Blockly–based editor, making it really easy to access all of Pip’s built-in functionality from a simple, visual programming language.

Back the project

If you’d like to back Curious Chip and bag your own Pip, you can check out their Kickstarter page here. And if you watch their promo video closely, you may see a familiar face from the Raspberry Pi community.

Are you planning on starting your own Raspberry Pi-inspired crowd-funded campaign? Then be sure to tag us on social media. We love to see what the community is creating for our little green (or sometimes blue) computer.

The post Pip: digital creation in your pocket from Curious Chip appeared first on Raspberry Pi.

Ares Kodi Project Calls it Quits After Hollywood Cease & Desist

Post Syndicated from Andy original https://torrentfreak.com/ares-kodi-project-calls-it-quits-after-hollywood-cease-desist-171117/

This week has been particularly bad for those involved in the Kodi addon scene. Following cease-and-desist notices from the MPA-led anti-piracy coalition Alliance for Creativity and Entertainment, several addon developers and repositories shut down.

With Columbia, Disney, Paramount, Twentieth Century Fox, Universal, Warner, Netflix, Amazon and Sky TV all lined up for war, the third-party developers had little choice but to quit. One of those affected was the leader of the hugely popular Ares Project, which quietly disappeared mid-week.

The Ares Wizard was an extremely popular and important piece of software which allowed people to switch Kodi builds, install third-party addons, install popular repositories, change system settings, and carry out backups. It’s installed on huge numbers of machines worldwide but it will soon fall into disrepair.

The mighty Ares Wizard in action

“[This week] I was subject to a hand-delivered notice to cease-and-desist from MPA & ACE,” Ares Project leader Tekto informs TorrentFreak.

“Given the notice, we obviously shut down the repo and wizard as requested.”

The news that Ares Project is done and never coming back will be a huge blow to the community. The project just celebrated its second birthday and has grown exponentially since it first arrived on the scene.

“Ares Project started in Oct 2015. Originally it was to be a tool to set up the video cache on Kodi correctly. However, many ideas were thrown into the pot and it became a wee bit more; such as a wizard to install community provided builds, common addons and few other tweaks and options,” Tekto says.

“For my own part I started blogging earlier that year as part of a longer-term goal to be self-funding. I always disliked seeing begging bowls out to support ‘server’ costs, many of which were cheap £5-10 per month servers that were used to gain £100s in donations.

“The blog, via affiliate links and ads, could and would provide the funds to cover our hosting costs without resorting to begging for money every weekend.”

Intrigued by this first wave of actions by ACE in Europe, TorrentFreak asked for a copy of the MPA/ACE cease-and-desist notice but unfortunately, Tekto flat-out refused. All he would tell us is that he’d agreed not to give out any copies or screenshots and that he was adhering to that 100%.

That only leaves speculation as to what grounds the MPA/ACE cited for closing the project but to be fair, it doesn’t take much thought to find a direct comparison. Earlier this year, in the BREIN v Filmspeler case, the European Court of Justice (ECJ) ruled that selling “fully-loaded” Kodi boxes amounted to illegally communicating copyrighted content to the public.

With that in mind, it doesn’t take much of a leap to see how this ruling could also apply to someone distributing “fully-loaded” Kodi software builds or addons via a website. It had previously been considered a legal gray area, of course, and it was in that space that the Ares team believed it operated. After all, it took ECJ clarification for local courts in the Netherlands to be satisfied with the legal position.

“There was never any question that what we were doing was illegal. We didn’t and never have hosted any content, we always prevented discussions about illegal paid services, and never sold any devices, pre-loaded or otherwise. That used to be enough to occupy the ‘gray’ area which meant we were safe to develop our applications. That changed in 2017 as we were to discover,” Tekto notes.

Up until this week and apparently oblivious to how the earlier ECJ ruling might affect their operation, things had been going extremely well for Ares. In mid-2016, the group moved to its own support forum that attracted 100,000 signed-up members and 300,000 visitors every month.

“This was quite an achievement in terms of viral marketing but ultimately this would become part of our downfall,” Tekto says.

“The recent innovation of the ‘basket driven’ Ares Portal system seems to have triggered the legal move to shut the project down completely. This simple system gave access to hundreds of add-ons. The system removed the need for builds, blogs and YouTubers – you just shopped on the site for addons and then installed them to your device with a simple 6 digit code.”

While Ares and Tekto still didn’t believe they were doing anything illegal (addons were linked, not hosted) it is now pretty clear to them that the previous gray area has been well and truly closed, at least as far as the MPA/ACE alliance is concerned. And with that in mind, the show is over. Done. Finished.

“We are not criminals or malicious hackers, we weren’t even careful about hiding our identities. You couldn’t meet a more ordinary bunch of folks in truth,” he says.

“There was never any question we would close our doors if what we were doing crossed any boundaries of legality. So with the notice served on us, we are closing our doors and removing all our websites and applications. It’s a sad day in many ways, but nobody wants to be facing court or a potential custodial sentence, for what is essentially a hobby.”

Finally, Tekto says that others like him might want to consider their positions carefully, before they too get a knock at the door. In the meantime, he gives thanks to the project’s supporters, who have remained loyal over the past two years.

“It just leaves me to thank our users for their support and step away from the Kodi scene,” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

timeShift(GrafanaBuzz, 1w) Issue 22

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/11/17/timeshiftgrafanabuzz-1w-issue-22/

Welcome to TimeShift

We hope you liked our recent article with videos and slides from the events we’ve participated in recently. With Thanksgiving right around the corner, we’re getting a breather from work-related travel, but only a short one. We have some events in the coming weeks, and of course are busy filling in the details for GrafanaCon EU.

This week we have a lot of articles, videos and presentations to share, as well as some important plugin updates. Enjoy!


Latest Release

Grafana 4.6.2 is now available and includes some bug fixes:

  • Prometheus: Fixes bug with new Prometheus alerts in Grafana. Make sure to download this version if you’re using Prometheus for alerting. More details in the issue. #9777
  • Color picker: Bug after using textbox input field to change/paste color string #9769
  • Cloudwatch: build using golang 1.9.2 #9667, thanks @mtanda
  • Heatmap: Fixed tooltip for “time series buckets” mode #9332
  • InfluxDB: Fixed query editor issue when using > or < operators in WHERE clause #9871

Download Grafana 4.6.2 Now


From the Blogosphere

Cloud Tech 10 – 13th November 2017 – Grafana, Linux FUSE Adapter, Azure Stack and more!: Mark Whitby is a Cloud Solution Architect at Microsoft UK. Each week he produces a video reviewing new developments with Microsoft Azure. This week Mark covers the new Azure Monitoring Plugin we recently announced. He also shows you how to get up and running with Grafana quickly using the Azure Marketplace.

Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes: Oracle published an article on monitoring WebLogic server on Kubernetes. To do this, you’ll use the WebLogic Monitoring Exporter to scrape the server metrics and feed them to Prometheus, then visualize the data in Grafana. Marina goes into a lot of detail and provides sample files and configs to help you get going.

Getting Started with Prometheus: Will Robinson has started a new series on monitoring with Prometheus from someone who has never touched it before. Part 1 introduces a number of monitoring tools and concepts, and helps define a number of monitoring terms. Part 2 teaches you how to spin up Prometheus in a Docker container, and takes a look at writing queries. Looking forward to the third post, when he dives into the visualization aspect.

Monitoring with Prometheus: Alexander Schwartz has made the slides from his most recent presentation at the Continuous Lifecycle Conference in Germany available. In his talk, he discussed getting started with Prometheus, how it differs from other monitoring concepts, and provided examples of how to monitor and alert. We’ll link to the video of the talk when it’s available.

Using Grafana with SiriDB: Jeroen van der Heijden has written an in-depth tutorial to help you visualize data from the open source TSDB, SiriDB in Grafana. This tutorial will get you familiar with setting up SiriDB and provides a sample dashboard to help you get started.

Real-Time Monitoring with Grafana, StatsD and InfluxDB – Artur Caliendo Prado: This is a video from a talk at The Conf, held in Brazil. Artur’s presentation focuses on the experiences they had building a monitoring stack at Youse, how their monitoring became more complex as they scaled, and the platform they built to make sense of their data.

Using Grafana & InfluxDB to view XIV Host Performance Metrics – Part 4 Array Stats: This is the fourth part in a series of posts about host performance metrics. This post dives into array stats to identify workloads and maintain balance across ports. Check out part 1, part 2 and part 3.


GrafanaCon Tickets are Going Fast

Tickets are going fast for GrafanaCon EU, but we still have a seat reserved for you. Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

Get Your Ticket Now


Grafana Plugins

Plugin authors are often adding new features and fixing bugs, which will make your plugin perform better – so it’s important to keep your plugins up to date. We’ve made updating easy; for on-prem Grafana, use the Grafana-cli tool, or update with 1 click if you’re using Hosted Grafana.

UPDATED PLUGIN

Hawkular data source – There is an important change in this release – as this datasource is now able to fetch not only Hawkular Metrics but also Hawkular Alerts, the server URL in the datasource configuration must be updated: http://myserver:123/hawkular/metrics must be changed to http://myserver:123/hawkular

Some of the changes (see the release notes for more details):

  • Allow per-query tenant configuration
  • Annotations can now be configured out of Availability metrics and Hawkular Alerts events in addition to string metrics
  • allows dot character in tag names

Update

UPDATED PLUGIN

Diagram Panel – This is the first release in a while for the popular Diagram Panel plugin.

In addition to these changes, there are also a number of bug fixes.

Update

UPDATED PLUGIN

Influx Admin Panel – received a number of improvements:

  • Fix issue always showing query results
  • When there is only one row, swap rows/cols (i.e. SHOW DIAGNOSTICS)
  • Improved auto-refresh behavior
  • Fix query time sorting
  • show ‘status’ field (killed, etc)

Update


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We have some awesome talks and events coming soon. Hope to see you at one of these!

How to Use Open Source Projects for Performance Monitoring | Webinar
Nov. 29, 1pm EST:
Learn how you can use popular open source projects for performance monitoring of your infrastructure, applications, and cloud: faster, easier, and at scale. In this webinar, Daniel Lee from Grafana Labs and Chris Churilo from InfluxData will provide you with step-by-step instructions, from download and configuration to collecting metrics and building dashboards and alerts.

RSVP

KubeCon | Austin, TX – Dec. 6-8, 2017: We’re sponsoring KubeCon 2017! This is the must-attend conference for cloud native computing professionals. KubeCon + CloudNativeCon brings together leading contributors in:

  • Cloud native applications and computing
  • Containers
  • Microservices
  • Central orchestration processing
  • And more

Buy Tickets

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. Carl Bergquist is managing the Cloud and Monitoring Devroom, and the CFP is now open. There is no need to register; all are welcome. If you’re interested in speaking at FOSDEM, submit your talk now!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

We were glad to be a part of InfluxDays this year, and are looking forward to seeing the InfluxData team in NYC in February.


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

I enjoy writing these weekly roundups, but am curious how I can improve them. Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Visualising Weather Station data with Initial State

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/initial-state/

Since we launched the Oracle Weather Station project, we’ve collected more than six million records from our network of stations at schools and colleges around the world. Each one of these records contains data from ten separate sensors — that’s over 60 million individual weather measurements!

Weather station measurements in Oracle database - Initial State

Weather station measurements in Oracle database

Weather data collection

Having lots of data covering a long period of time is great for spotting trends, but to do so, you need some way of visualising your measurements. We’ve always had great resources like Graphing the weather to help anyone analyse their weather data.

And from now on, it’s going to be even easier for our Oracle Weather Station owners to display and share their measurements. I’m pleased to announce a new partnership with our friends at Initial State: they are generously providing a white-label platform to which all Oracle Weather Station recipients can stream their data.

Using Initial State

Initial State makes it easy to create vibrant dashboards that show off local climate data. The service is perfect for having your Oracle Weather Station data on permanent display, for example in the school reception area or on the school’s website.

But that’s not all: the Initial State toolkit includes a whole range of easy-to-use analysis tools for extracting trends from your data. Distribution plots and statistics are just a few clicks away!
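If you’re curious what the streaming side looks like in code, here is a minimal sketch using Initial State’s Python module (ISStreamer). The bucket details and access key are placeholders; Oracle Weather Station schools should use the credentials we’re sending out:

from ISStreamer.Streamer import Streamer

# Placeholder credentials: substitute the ones supplied for your station
streamer = Streamer(bucket_name="Weather Station",
                    bucket_key="weather_station",
                    access_key="YOUR_ACCESS_KEY")

# Log a couple of example measurements
streamer.log("Temperature (C)", 19.3)
streamer.log("Humidity (%)", 72.4)

streamer.flush()  # make sure buffered events are sent
streamer.close()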

Humidity value distribution (May-Nov 2017) - Raspberry Pi Oracle Weather Station Initial State

Looks like Auntie Beryl is right — it has been a damp old year! (Humidity value distribution May–Nov 2017)

The wind direction data from my Weather Station supports my excuse as to why I’ve not managed a high-altitude balloon launch this year: to use my launch site, I need winds coming from the east, and those have been in short supply.

Chart showing wind direction over time - Raspberry Pi Oracle Weather Station Initial State

Chart showing wind direction over time

Initial State credentials

Every Raspberry Pi Oracle Weather Station school will shortly be receiving the credentials needed to start streaming their data to Initial State. If you’re super keen though, please email [email protected] with a photo of your Oracle Weather Station, and I’ll let you jump the queue!

The Initial State folks are big fans of Raspberry Pi and have a ton of Pi-related projects on their website. They even included shout-outs to us in the music video they made to celebrate the publication of their 50th tutorial. Can you spot their weather station?

Your home-brew weather station

If you’ve built your own Raspberry Pi–powered weather station and would like to dabble with the Initial State dashboards, you’re in luck! The team at Initial State is offering 14-day trials for everyone. For more information on Initial State, and to sign up for the trial, check out their website.

The post Visualising Weather Station data with Initial State appeared first on Raspberry Pi.

Staying Busy Between Code Pushes

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/11/16/staying-busy-between-code-pushes/


Maintaining a regular cadence of pushing out releases, adding new features, implementing bug fixes and staying on top of support requests is important for any software to thrive; but especially important for open source software due to its rapid pace. It’s easy to lose yourself in code and forget that events are happening all the time – in every corner of the world, where we can learn, share knowledge, and meet like-minded individuals to build better software, together. There are so many amazing events we’d like to participate in, but there simply isn’t enough time (or budget) to fit them all in. Here’s what we’ve been up to recently; between code pushes.

Recent Events

Øredev Conference | Malmö, Sweden: Øredev is one of the biggest developer conferences in Scandinavia, and Grafana Labs jumped at the chance to be a part of it. In early November, Grafana Labs Principal Developer, Carl Bergquist, gave a great talk on “Monitoring for Everyone”, which discussed the concepts of monitoring and why everyone should care, different ways to monitor your systems, extending your monitoring to containers and microservices, and finally what to monitor and alert on. Watch the video of his talk below.

InfluxDays | San Francisco, CA: Dan Cech, our Director of Platform Services, spoke at InfluxDays in San Francisco on Nov 14, and Grafana Labs sponsored the event. InfluxDB is a popular data source for Grafana, so we wanted to connect to the InfluxDB community and show them how to get the most out of their data. Dan discussed building dashboards, choosing the best panels for your data, setting up alerting in Grafana and a few sneak peeks of the upcoming Grafana 5.0. The video of his talk is forthcoming, but Dan has made his presentation available.

PromCon | Munich, Germany: PromCon is the Prometheus-focused event of the year. In August, Carl Bergquist, had the opportunity to speak at PromCon and take a deep dive into Grafana and Prometheus. Many attendees at PromCon were already familiar with Grafana, since it’s the default dashboard tool for Prometheus, but Carl had a trove of tricks and optimizations to share. He also went over some major changes and what we’re currently working on.

CNCF Meetup | New York, NY: Grafana Co-founder and CEO, Raj Dutt, participated in a panel discussion with the folks of Packet and the Cloud Native Computing Foundation. The discussion focused on the success stories, failures, rationales and in-the-trenches challenges when running cloud native in private or non “public cloud” datacenters (bare metal, colocation, private clouds, special hardware or networking setups, compliance and security-focused deployments).

Percona Live | Dublin: Daniel Lee traveled to Dublin, Ireland this fall to present at the database conference Percona Live. There he showed the new native MySQL support, along with a number of upcoming features in Grafana 5.0. His presentation is available to download.

Big Monitoring Meetup | St. Petersburg, Russian Federation: Alexander Zobnin, our developer located in Russia, is the primary maintainer of our popular Zabbix plugin. He attended the Big Monitoring Meetup to discuss monitoring, Grafana dashboards and democratizing metrics.

Why observability matters – now and in the future | Webinar: Our own Carl Bergquist joined Neil Gehani, Director of Product at Weaveworks, to share best practices on how to get started with monitoring both your application and infrastructure: start capturing metrics that matter, then aggregate and visualize them in a useful way that allows for identifying bottlenecks and proactively preventing incidents. View Carl’s presentation.

Upcoming Events

We’re going to maintain this momentum with a number of upcoming events, and hope you can join us.

KubeCon | Austin, TX – Dec. 6-8, 2017: We’re sponsoring KubeCon 2017! This is the must-attend conference for cloud native computing professionals. KubeCon + CloudNativeCon brings together leading contributors in:

  • Cloud native applications and computing
  • Containers
  • Microservices
  • Central orchestration processing
  • And more.

Buy Tickets

How to Use Open Source Projects for Performance Monitoring | Webinar
Nov. 29, 1pm EST:
Learn how you can use popular open source projects for performance monitoring of your infrastructure, applications, and cloud: faster, easier, and at scale. In this webinar, Daniel Lee from Grafana Labs and Chris Churilo from InfluxData will provide you with step-by-step instructions, from download and configuration to collecting metrics and building dashboards and alerts.

RSVP

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. Carl Bergquist is managing the Cloud and Monitoring Devroom, and the CFP is now open. There is no need to register; all are welcome. If you’re interested in speaking at FOSDEM, submit your talk now!

GrafanaCon EU

Last, but certainly not least, the next GrafanaCon is right around the corner. GrafanaCon EU (to be held in Amsterdam, Netherlands, March 1-2, 2018) is a two-day event with talks centered around Grafana and the surrounding ecosystem. In addition to the latest features and functionality of Grafana, you can expect to see and hear from members of the monitoring community like Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more. Head to grafanacon.org to see the latest speakers confirmed. We have speakers from Automattic, Bloomberg, CERN, Fastly, Tinder and more!

Conclusion

The Grafana Labs team is spread across the globe. Having a “post-geographic” company structure gives us the opportunity to take part in events wherever they may be held in the world. As our team continues to grow, we hope to take part in even more events, and hope you can find the time to join us.

Brand new and blue: our Brazilian Raspberry Pi 3

Post Syndicated from Mike Buffham original https://www.raspberrypi.org/blog/raspberry-pi-brazil/

Programa de revendedor aprovado agora no Brasil — our Approved Reseller programme is live in Brazil, with Anatel-approved Raspberry Pis in a rather delicious shade of blue on sale from today.

A photo of the blue-variant Raspberry Pi 3

Blue Raspberry is more than just the best Jolly Rancher flavour

The challenge

The difficulty in buying our products — and the lack of Anatel certification — have been consistent points of feedback from our many Brazilian customers and followers. In much the same way that electrical products in the USA must be FCC-approved in order to be produced or sold there, products sold in Brazil must be approved by Anatel. And so we’re pleased to tell you that the Raspberry Pi finally has this approval.

Blue Raspberry

Today we’re also announcing the appointment of our first Approved Reseller in Brazil: FilipeFlop will be able to sell Raspberry Pi 3 units across the country.

Filipeflop logo - Raspberry Pi Brazil

A big shout-out to the team at FilipeFlop that has worked so hard with us to ensure that we’re getting the product on sale in Brazil at the right price. (They also helped us understand the various local duties and taxes which need to be paid!)

Please note: the blue colouring of the Raspberry Pi 3 sold in Brazil is the only difference between it and the standard green model. People outside Brazil will not be able to purchase the blue variant from FilipeFlop.

More Raspberry Pi Approved Resellers

Raspberry Pi Approved Reseller logo - Raspberry Pi Brazil

Since first announcing it back in August, we have further expanded our Approved Reseller programme by adding resellers for Austria, Canada, Cyprus, Czech Republic, Denmark, Estonia, Finland, Germany, Latvia, Lithuania, Norway, Poland, Slovakia, Sweden, Switzerland, and the US. All Approved Resellers are listed on our products page, and more will follow over the next few weeks!

Make and share

If you’re based in Brazil and you’re ordering the new, blue Raspberry Pi, make sure to share your projects with us on social media. We can’t wait to see what you get up to with them!

The post Brand new and blue: our Brazilian Raspberry Pi 3 appeared first on Raspberry Pi.

I Still Prefer Eclipse Over IntelliJ IDEA

Post Syndicated from Bozho original https://techblog.bozho.net/still-prefer-eclipse-intellij-idea/

Over the years I’ve observed an inevitable shift from Eclipse to IntelliJ IDEA. Last year they were almost equal in usage, and I have the feeling things are swaying even more towards IDEA.

IDEA is like the iPhone of IDEs – its users tell you that “you will feel how much better it is once you get used to it”, “are you STILL using Eclipse??”, “IDEA is so much better, I thought everyone has switched”, etc.

I’ve been using mostly Eclipse for the past 12 years, but in some cases I did use IDEA – when I was writing Scala, when I was writing Android, and most recently – when Eclipse failed to be ready for the Java 9 release, so after half a day of trying to get it working, I just switched to IDEA until Eclipse finally gets a working Java 9 version (with Maven and the rest of the stuff).

But I will get back to Eclipse again, soon. And I still prefer it. Not just because of all the key combinations I've internalized (you can reuse those in IDEA), but because there are still things I find worse in IDEA. Of course, IDEA has many more cool features, like code improvement suggestions and actually working plugins for everything. But at least some of the problems I see have to do with the more basic development workflow and experience. And you can't compensate for those with sugarcoating. So here they are:

  • Projects are not automatically built (by default), so you can end up with compilation errors that you don't see until you open a non-compiling file or run a build. And turning the autobuild on makes my machine crawl. I know I need an upgrade, but that's not the point – not having "build on change" was a huge surprise to me the first time I tried IDEA. I recently complained about that on Twitter, and it turns out "it's a feature". The rationale seems to be that if you use refactoring, that shouldn't happen. Well, there are dozens of cases when it does happen: refactoring by adding a method parameter, by changing the type of a parameter, by removing a parameter (where the IDE can't infer which parameter is removed based on the types), by changing return types. Also, a change in Maven/Gradle dependencies may introduce compilation issues that you don't get to see. This is not a reasonable default at all, and I think the performance issues are the only reason it's still the default. I think this makes the experience much worse.
  • You can have only one project per screen. Maybe there are those small companies with greenfield projects where you only need one. But I’ve never been in a situation, where you don’t at least occasionally need a separate project. Be it an “experiments” one, a “tools” one, or whatever. And no, multi-module maven projects (which IDEA handles well) are not sufficient. So each time you need to step out of your main project, you launch another screen. Apart from the bad usability, it’s double the memory, double the fun.
  • Speaking of memory, it seems to take more memory than Eclipse. I don't have representative benchmarks of that, and I know that my 8 GB RAM home machine is way too small for development nowadays, but still.
  • It feels less responsive and clunky. There is some minor delay that I can't define well, but "I feel it". I read somewhere that they were excessively repainting the screen elements, so that might be the explanation. Eclipse feels smoother (I know that's not a proper argument, but I can't be more precise).
  • Due to some extra cleverness, I have "unused methods" and "never assigned fields" warnings all around the project. The project uses Spring, so these methods and fields are controller methods and autowired fields. Maybe some Spring plugin would take care of that, but Spring is not the only framework that uses reflection. Even getters and setters on POJOs get the unused warnings. What's the problem with those warnings? That warnings are devalued. They don't mean anything now. There isn't a "yellow" indicator on the class either, so you don't actually see the number of warnings you have. Eclipse displays warnings better, and there are far fewer false positives.
  • The call hierarchy is slightly worse. But since that's the most important IDE feature for me (alongside refactoring), it matters. It doesn't give you the call hierarchy of default constructors that are not explicitly defined. Also, from what I've seen, IDEA users don't often use the call hierarchy feature. "Find usages" predates the call hierarchy, I think, and is also much more visible in the UI, so some IDEA users don't even know what a call hierarchy is and repeatedly do "find usages" instead. That's only partly the IDE's fault.
  • No search in the output console. Come on, why do I have an IDE where I have to copy the output and paste it into a text editor in order to search it? Now, to clarify, the console does have search. But when I run my (Spring Boot) application, it outputs stuff in a panel at the bottom that is not the console and doesn't have search.
  • CTRL+arrows by default jumps over whole words, not camel-cased words. This is configurable, but it is yet another odd default. You almost always want to be able to traverse your identifiers word by word (in camel case), rather than skipping over the whole variable (method/class) name.
  • A few years ago, when I used it for Scala, the project never actually compiled. But I guess that's more Scala's fault than the IDE's.

Apart from the first two, the rest are not major issues, I agree. But they add up. Ultimately, it’s a matter of personal choice whether you can turn a blind eye to these issues. But I’m getting back to Eclipse again. At some point I will propose improvements in the IntelliJ IDEA backlog and will check it again in a few years, I guess.

The post I Still Prefer Eclipse Over IntelliJ IDEA appeared first on Bozho's tech blog.

Visualize AWS CloudTrail Logs Using AWS Glue and Amazon QuickSight

Post Syndicated from Luis Caro Perez original https://aws.amazon.com/blogs/big-data/streamline-aws-cloudtrail-log-visualization-using-aws-glue-and-amazon-quicksight/

Being able to easily visualize AWS CloudTrail logs gives you a better understanding of how your AWS infrastructure is being used. It can also help you audit and review AWS API calls and detect security anomalies inside your AWS account. To do this, you must be able to perform analytics based on your CloudTrail logs.

In this post, I walk through using AWS Glue and AWS Lambda to convert AWS CloudTrail logs from JSON to a query-optimized format dataset in Amazon S3. I then use Amazon Athena and Amazon QuickSight to query and visualize the data.

Solution overview

To process CloudTrail logs, you must implement the following architecture:

CloudTrail delivers log files to a folder structure in an Amazon S3 bucket. To crawl these logs correctly, you modify the file contents and folder structure using an Amazon S3-triggered Lambda function that stores the transformed files in a single folder of an S3 bucket. When the files are in a single folder, AWS Glue scans the data, converts it into Apache Parquet format, and catalogs it to allow for querying and visualization using Amazon Athena and Amazon QuickSight.

Walkthrough

Let’s look at the steps that are required to build the solution.

Set up CloudTrail logs

First, you need to set up a trail that delivers log files to an S3 bucket. To create a trail in CloudTrail, follow the instructions in Creating a Trail.

When you finish, the trail settings page should look like the following screenshot:

In this example, I set up log files to be delivered to the cloudtraillfcaro bucket.

Consolidate CloudTrail reports into a single folder using Lambda

AWS CloudTrail delivers log files using the following folder structure inside the configured Amazon S3 bucket:

AWSLogs/ACCOUNTID/CloudTrail/REGION/YEAR/MONTH/DAY/filename.json.gz

Additionally, log files have the following structure:

{
    "Records": [{
        "eventVersion": "1.01",
        "userIdentity": {
            "type": "IAMUser",
            "principalId": "AIDAJDPLRKLG7UEXAMPLE",
            "arn": "arn:aws:iam::123456789012:user/Alice",
            "accountId": "123456789012",
            "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
            "userName": "Alice",
            "sessionContext": {
                "attributes": {
                    "mfaAuthenticated": "false",
                    "creationDate": "2014-03-18T14:29:23Z"
                }
            }
        },
        "eventTime": "2014-03-18T14:30:07Z",
        "eventSource": "cloudtrail.amazonaws.com",
        "eventName": "StartLogging",
        "awsRegion": "us-west-2",
        "sourceIPAddress": "72.21.198.64",
        "userAgent": "signin.amazonaws.com",
        "requestParameters": {
            "name": "Default"
        },
        "responseElements": null,
        "requestID": "cdc73f9d-aea9-11e3-9d5a-835b769c0d9c",
        "eventID": "3074414d-c626-42aa-984b-68ff152d6ab7"
    },
    ... additional entries ...
    ]
}

If AWS Glue crawlers are used to catalog these files as they are written, the following obstacles arise:

  1. AWS Glue identifies a different table for each folder, because the folders don't follow a traditional partition format.
  2. Based on the structure of the file content, AWS Glue identifies the tables as having a single column of type array.
  3. CloudTrail logs have JSON attributes that use uppercase letters. According to the Best Practices When Using Athena with AWS Glue, it is recommended that you convert these to lowercase.

To have AWS Glue catalog all log files in a single table with all the columns describing each event, implement the following Lambda function:

from __future__ import print_function
import json
import urllib
import boto3
import gzip

s3 = boto3.resource('s3')
client = boto3.client('s3')

def convert_columns_to_lowercase(obj):
    # Lowercase every key in the JSON object, following the Athena/Glue
    # best practice of lowercase column names
    for key in obj.keys():
        new_key = key.lower()
        if new_key != key:
            obj[new_key] = obj[key]
            del obj[key]
    return obj


def lambda_handler(event, context):

    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    print(bucket)
    print(key)
    try:
        # Flatten the folder structure: all transformed files land in a single
        # 'flatfiles' prefix so that one Glue crawler can catalog them as one table
        newKey = 'flatfiles/' + key.replace("/", "")
        client.download_file(bucket, key, '/tmp/file.json.gz')
        with gzip.open('/tmp/out.json.gz', 'w') as output, gzip.open('/tmp/file.json.gz', 'rb') as file:
            i = 0
            for line in file:
                # Write each element of the 'records' array as its own line
                for record in json.loads(line, object_hook=convert_columns_to_lowercase)['records']:
                    if i != 0:
                        output.write("\n")
                    output.write(json.dumps(record))
                    i += 1
        client.upload_file('/tmp/out.json.gz', bucket, newKey)
        return "success"
    except Exception as e:
        print(e)
        print('Error processing object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

The function goes over each element of the records array, changes uppercase letters to lowercase in the column names, and writes each element of the array as a single line of a new file. The new file is saved inside a flatfiles folder, which the function creates at the top level of the S3 bucket without any subfolders.

The function should have a role containing a policy with at least the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::cloudtraillfcaro/*",
                "arn:aws:s3:::cloudtraillfcaro"
            ],
            "Effect": "Allow"
        }
    ]
}

In this example, CloudTrail delivers logs to the cloudtraillfcaro bucket. Make sure that you replace this name with your bucket name in the policy. For more information about how to work with inline policies, see Working with Inline Policies.

After the Lambda function is created, you can set up the following trigger using the Triggers tab on the AWS Lambda console.

Choose Add trigger, and choose S3 as a source of the trigger.

After choosing the source, configure the following settings:

In the trigger, any file that is written to the path for the log files—which in this case is AWSLogs/119582755581/CloudTrail/—is processed. Make sure that the Enable trigger check box is selected and that the bucket and prefix parameters match your use case.
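If you prefer to configure the trigger with code rather than the console, a minimal boto3 sketch might look like the following. The bucket name, account ID, and Lambda function ARN are placeholders for illustration; the Lambda function must also have a resource policy permitting S3 to invoke it.

import boto3

s3 = boto3.client('s3')

# Placeholder names; substitute your own bucket and function ARN
s3.put_bucket_notification_configuration(
    Bucket='cloudtraillfcaro',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:cloudtrail-flattener',
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'AWSLogs/123456789012/CloudTrail/'},
                {'Name': 'suffix', 'Value': '.json.gz'}
            ]}}
        }]
    }
)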

After you set up the function and receive log files, the bucket (in this case cloudtraillfcaro) should contain the processed files inside the flatfiles folder.

Catalog source data

Once the files are processed by the Lambda function, set up a crawler named cloudtrail to catalog them.

The crawler must point to the flatfiles folder.

All the crawlers and AWS Glue jobs created for this solution must have a role with the AWSGlueServiceRole managed policy and an inline policy with permissions to modify the S3 buckets used on the Lambda function. For more information, see Working with Managed Policies.

The role should look like the following:

In this example, the inline policy named s3perms contains the permissions to modify the S3 buckets.

After you choose the role, you can schedule the crawler to run on demand.

A new database is created, and the crawler is set to use it. In this case, the cloudtrail database is used for all the tables.
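If you manage infrastructure with scripts, an equivalent crawler can be created and started with boto3. This is a sketch under the assumptions of this walkthrough; the role name is a placeholder, and the role must carry the permissions described above.

import boto3

glue = boto3.client('glue')

# Role name and bucket path are assumptions for illustration
glue.create_crawler(
    Name='cloudtrail',
    Role='AWSGlueServiceRole-cloudtrail',
    DatabaseName='cloudtrail',
    Targets={'S3Targets': [{'Path': 's3://cloudtraillfcaro/flatfiles/'}]}
)
glue.start_crawler(Name='cloudtrail')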

After the crawler runs, a single table should be created in the catalog with the following structure:

The table should contain the following columns:

Create and run the AWS Glue job

To convert all the CloudTrail logs to a columnar store in Parquet, set up an AWS Glue job by following these steps.

Upload the following script into a bucket in Amazon S3:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
import boto3
import time

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "cloudtrail", table_name = "flatfiles", transformation_ctx = "datasource0")
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "make_struct", transformation_ctx = "resolvechoice1")
relationalized1 = resolvechoice1.relationalize("trail", args["TempDir"]).select("trail")
datasink = glueContext.write_dynamic_frame.from_options(frame = relationalized1, connection_type = "s3", connection_options = {"path": "s3://cloudtraillfcaro/parquettrails"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()

In the example, you load the script as a file named cloudtrailtoparquet.py. Make sure that you modify the script and update {"path": "s3://cloudtraillfcaro/parquettrails"} with the destination in which you want to store your results.

After uploading the script, add a new AWS Glue job. Choose a name and role for the job, and choose the option of running the job from An existing script that you provide.

To avoid processing the same data twice, enable the Job bookmark setting in the Advanced properties section of the job properties.

Choose Next twice, and then choose Finish.

If logs are already in the flatfiles folder, you can run the job on demand to generate the first set of results.
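The console steps above can also be expressed with boto3 if you prefer to script the job. This is a sketch: the role name and script location are placeholders, and the bookmark argument shown is what the Job bookmark setting enables.

import boto3

glue = boto3.client('glue')

# Role and script location are placeholders for illustration
glue.create_job(
    Name='cloudtrailtoparquet',
    Role='AWSGlueServiceRole-cloudtrail',
    Command={'Name': 'glueetl',
             'ScriptLocation': 's3://cloudtraillfcaro/scripts/cloudtrailtoparquet.py'},
    DefaultArguments={
        '--job-bookmark-option': 'job-bookmark-enable',
        '--TempDir': 's3://cloudtraillfcaro/temp/'
    }
)
glue.start_job_run(JobName='cloudtrailtoparquet')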

Once the job starts running, wait for it to complete.

When the job is finished, its Run status should be Succeeded. After that, you can verify that the Parquet files are written to the Amazon S3 location.

Catalog results

To be able to process results from Athena, you can use an AWS Glue crawler to catalog the results of the AWS Glue job.

In this example, the crawler is set to use the same database as the source named cloudtrail.

You can run the crawler using the console. When the crawler finishes running and has processed the Parquet results, a new table should be created in the AWS Glue Data Catalog. In this example, it’s named parquettrails.

The table should have the classification set to parquet.

It should have the same columns as the flatfiles table, with the exception of the struct type columns, which should be relationalized into several columns:

In this example, notice how the requestparameters column, which was a struct in the original table (flatfiles), was transformed to several columns—one for each key value inside it. This is done using a transformation native to AWS Glue called relationalize.

Query results with Athena

After crawling the results, you can query them using Athena. For example, to query which events took place in the time frame between 2017-10-23T12:00:00 and 2017-10-23T13:00:00, use the following SELECT statement:

select *
from cloudtrail.parquettrails
where eventtime > '2017-10-23T12:00:00Z' AND eventtime < '2017-10-23T13:00:00Z'
order by eventtime asc;

Be sure to replace cloudtrail.parquettrails with the names of your database and the table that references the Parquet results. Replace the datetimes with an hour during which your account had activity that was processed by the AWS Glue job.

Visualize results using Amazon QuickSight

Once you can query the data using Athena, you can visualize it using Amazon QuickSight. Before connecting Amazon QuickSight to Athena, be sure to grant QuickSight access to Athena and the associated S3 buckets in your account. For more information, see Managing Amazon QuickSight Permissions to AWS Resources. You can then create a new data set in Amazon QuickSight based on the Athena table that you created.

After setting up permissions, you can create a new analysis in Amazon QuickSight by choosing New analysis.

Then add a new data set.

Choose Athena as the source.

Give the data source a name (in this case, I named it cloudtrail).

Choose the name of the database and the table referencing the Parquet results.

Then choose Visualize.

After that, you should see the following screen:

Now you can create some visualizations. First, search for the sourceipaddress column, and drag it to the AutoGraph section.

You can see a list of the IP addresses that you have used to interact with AWS. To review whether these IP addresses have been used from IAM users, internal AWS services, or roles, use the type value that is inside the useridentity field of the original log files. Thanks to the relationalize transformation, this value is available as the useridentity.type column. After the column is added into the Group/Color box, the visualization should look like the following:

You can now see and distinguish the most used IPs and whether they are used from roles, AWS services, or IAM users.

After following all these steps, you can use Amazon QuickSight to add different columns from CloudTrail and perform different types of visualizations. You can build operational dashboards that continuously monitor AWS infrastructure usage and access. You can share those dashboards with others in your organization who might need to see this data.

Summary

In this post, you saw how you can use a simple Lambda function and an AWS Glue script to convert text files into Parquet to improve Athena query performance and data compression. The post also demonstrated how to use AWS Lambda to preprocess files in Amazon S3 and transform them into a format that is recognizable by AWS Glue crawlers.

This example used AWS CloudTrail logs, but you can apply the proposed solution to any set of files that, after preprocessing, can be cataloged by AWS Glue.


Additional Reading

Learn how to Harmonize, Query, and Visualize Data from Various Providers using AWS Glue, Amazon Athena, and Amazon QuickSight.


About the Authors

Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

Computing in schools: the report card

Post Syndicated from Philip Colligan original https://www.raspberrypi.org/blog/after-the-reboot/

Today the Royal Society published After the Reboot, a report card on the state of computing education in UK schools. It’s a serious piece of work, published with lots of accompanying research and data, and well worth a read if you care about these issues (which, if you’re reading this blog, I guess you do).

The headline message is that, while a lot has been achieved, there’s a long way to go before we can say that young people are consistently getting the computing education they need and deserve in UK schools.

If this were a school report card, it would probably say: “good progress when he applies himself, but would benefit from more focus and effort in class” (which is eerily reminiscent of my own school reports).

A child coding in Scratch on a laptop - Royal Society After the Reboot

Good progress

After the Reboot comes five and a half years after the Royal Society’s first review of computing education, Shut down or restart, a report that was published just a few days before the Education Secretary announced in January 2012 that he was scrapping the widely discredited ICT programme of study.

There’s no doubt that a lot has been achieved since 2012, and the Royal Society has done a good job of documenting those successes in this latest report. Computing is now part of the curriculum for all schools. There’s a Computer Science GCSE that is studied by thousands of young people. Organisations like Computing At School have built a grassroots movement of educators who are leading fantastic work in schools up and down the country. Those are big wins.

The Raspberry Pi Foundation has been playing its part. With the support of partners like Google, we’ve trained over a thousand UK educators through our Picademy programme. Those educators have gone on to work with hundreds of thousands of students, and many have become leaders in the field. Many thousands more have taken our free online training courses, and through our partnership with BT, CAS and the BCS on the Barefoot programme, we’re supporting thousands of primary school teachers to deliver the computing curriculum. Earlier this year we launched a free magazine for computing educators, Hello World, which has over 14,000 subscribers after just three editions.

A group of people learning about digital making - Royal Society After the Reboot

More to do

Despite all the progress, the Royal Society study has confirmed what many of us have been saying for some time: we need to do much more to support teachers to develop the skills and confidence to deliver the computing curriculum. More than anything, we need to give them the time to invest in their own professional development. The UK led the way on putting computing in the curriculum. Now we need to follow through on that promise by investing in a huge effort to support professional development across the school system.

This isn’t a problem that any one organisation or sector can solve on its own. It will require a grand coalition of government, industry, non-profits, and educators if we are going to make change at the pace that our young people need and deserve. Over the coming weeks and months, we’ll be working with our partners to figure out how we make that happen.

A boy learning about computing from a woman - Royal Society After the Reboot

The other 75%

While the Royal Society report rightly focuses on what happens in classrooms during the school day, we need to remember that children spend only 25% of their waking hours there. What about the other 75%?

Ask any computer scientist, engineer, or maker, and they’ll tell stories about how much they learned in those precious discretionary hours.

Ask an engineer of a certain age (ahem), and they will tell you about the local computing club where they got hands-on with new technologies, picked up new ideas, and were given help by peers and mentors. They might also tell you how they would spend dozens of hours typing in hundreds of lines of code from a magazine to create their own game, and dozens more debugging it when it didn't work.

One of our goals at the Raspberry Pi Foundation is to lead the revival in that culture of informal learning.

The revival of computing clubs

There are now more than 6,000 active Code Clubs in the UK, engaging over 90,000 young people each week. 41% of the kids at Code Club are girls. More than 150 UK CoderDojos take place in universities, science centres, and corporate offices, providing a safe space for over 4,000 young people to learn programming and digital making.

So far this year, there have been 164 Raspberry Jams in the UK, volunteer-led meetups attended by over 10,000 people, who come to learn from volunteers and share their digital making projects.

It’s a movement, and it’s growing fast. One of the most striking facts is that whenever a new Code Club, CoderDojo, or Raspberry Jam is set up, it is immediately oversubscribed.

So while we work on fixing the education system, there’s a tangible way that we can all make a huge difference right now. You can help set up a Code Club, get involved with CoderDojo, or join the Raspberry Jam movement.

The post Computing in schools: the report card appeared first on Raspberry Pi.

Say Hello To Our Newest AWS Community Heroes (Fall 2017 Edition)

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/say-hello-to-our-newest-aws-community-heroes-fall-2017-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these heroes share their time and knowledge across social media and through in-person events. Heroes also actively help drive community-led tracks at conferences. At this year’s re:Invent, many Heroes will be speaking during the Monday Community Day track.

This November, we are thrilled to have four Heroes joining our network of cloud innovators. Without further ado, meet our newest AWS Community Heroes!

Anh Ho Viet

Anh Ho Viet is the founder of AWS Vietnam User Group, Co-founder & CEO of OSAM, an AWS Consulting Partner in Vietnam, an AWS Certified Solutions Architect, and a cloud lover.

At OSAM, Anh and his enthusiastic team have helped many companies, from SMBs to Enterprises, move to the cloud with AWS. They offer a wide range of services, including migration, consultation, architecture, and solution design on AWS. Anh’s vision for OSAM is beyond a cloud service provider; the company will take part in building a complete AWS ecosystem in Vietnam, where other companies are encouraged to become AWS partners through training and collaboration activities.

In 2016, Anh founded the AWS Vietnam User Group as a channel to share knowledge and hands-on experience among cloud practitioners. Since then, the community has reached more than 4,800 members and is still expanding. The group holds monthly meetups, connects many SMEs to AWS experts, and provides real-time, free-of-charge consultancy to startups. In August 2017, Anh joined as lead content creator of a program called “Cloud Computing Lectures for Universities” which includes translating AWS documentation & news into Vietnamese, providing students with fundamental, up-to-date knowledge of AWS cloud computing, and supporting students’ career paths.


Thorsten Höger

Thorsten Höger is CEO and Cloud consultant at Taimos, where he is advising customers on how to use AWS. Being a developer, he focuses on improving development processes and automating everything to build efficient deployment pipelines for customers of all sizes.

Before becoming self-employed, Thorsten worked as a developer and CTO of Germany's first private bank running on AWS. With his colleagues, he migrated the core banking system to the AWS platform in 2013. Since then, he has organized the AWS user group in Stuttgart and is a frequent speaker at Meetups, BarCamps, and other community events.

As a supporter of open source software, Thorsten is maintaining or contributing to several projects on Github, like test frameworks for AWS Lambda, Amazon Alexa, or developer tools for CloudFormation. He is also the maintainer of the Jenkins AWS Pipeline plugin.

In his spare time, he enjoys indoor climbing and cooking.


Becky Zhang

Yu Zhang (Becky Zhang) is COO of BootDev, which focuses on Big Data solutions on AWS and high-concurrency web architecture. Before helping run BootDev, she worked at Yubis IT Solutions as an operations manager.

Becky plays a key role in the AWS User Group Shanghai (AWSUGSH), regularly organizing AWS UG events including AWS Tech Meetups and happy hours, gathering AWS talent together to discuss the latest technology and AWS services. As a woman in the technology industry, Becky is keen on promoting Women in Tech and encourages more women to get involved in the community.

Becky also connects the China AWS User Group with user groups in other regions, including Korea, Japan, and Thailand. She was invited as a panelist at AWS re:Invent 2016 and spoke at the Seoul AWS Summit this April to introduce AWS User Group Shanghai and communicate with other AWS User Groups around the world.

Besides events, Becky also promotes the Shanghai AWS User Group by posting AWS-related tech articles, event forecasts, and event reports to Weibo, Twitter, Meetup.com, and WeChat (which now has over 2000 official account followers).


Nilesh Vaghela

Nilesh Vaghela is the founder of ElectroMech Corporation, an AWS Cloud and open source focused company (the company started with an open source motto). Nilesh has been very active in the Linux community since 1998. He started working with AWS Cloud technologies in 2013, and in 2014 he trained a dedicated cloud team and began fully supporting AWS cloud services as an AWS Standard Consulting Partner. He always works to establish and encourage cloud and open source communities.

He started the AWS Meetup community in Ahmedabad in 2014, and 12 Meetups have been conducted so far, focusing on various AWS technologies. The Meetup has quickly grown to over 2000 members. Nilesh also created a Facebook group for AWS enthusiasts in Ahmedabad, with over 1500 members.

Apart from the AWS Meetup, Nilesh has delivered a number of seminars, workshops, and talks around AWS introduction and awareness, at various organizations, as well as at colleges and universities. He has also been active in working with startups, presenting AWS services overviews and discussing how startups can benefit the most from using AWS services.

Nilesh is a Red Hat Linux and AWS Cloud technologies trainer as well.


To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

B2 Cloud Storage Roundup

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/b2-cloud-storage-roundup/

B2 Integrations
Over the past several months, B2 Cloud Storage has continued to grow like we planted magic beans. During that time we have added a B2 Java SDK, and certified integrations with GoodSync, Arq, Panic, UpdraftPlus, Morro Data, QNAP, Archiware, Restic, and more. In addition, B2 customers like Panna Cooking, Sermon Audio, and Fellowship Church are happy they chose B2 as their cloud storage provider. If any of that sounds interesting, read on.

The B2 Java SDK

While the Backblaze B2 API is well documented and straightforward to implement, a few of our integration partners asked if we had an SDK they could use. So we developed one as an open-source project on GitHub, where we hope interested parties will not only use our Java SDK, but also make it better for everyone else.

There are different reasons one might use the Java SDK, but a couple of areas where the SDK can simplify the coding process are:

Expiring Authorization — B2 requires that the application key for a given account be reissued once a day when using the API. If the application key expires while you are in the middle of transferring files or some other B2 activity (bucket list, etc.), the SDK can be used to detect and then update the application key on the fly. Your B2-related activities will continue without incident, and without you having to capture and code your own exception case.

Error Handling — There are different types of error codes B2 will return, from expired application keys to detecting malformed requests to command time-outs. The SDK can dramatically simplify the coding needed to capture and account for the various things that can happen.
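To make the first case concrete, here is a minimal Python sketch of the re-authorization pattern that the SDK automates, using the raw B2 v1 HTTP API. The account ID and application key are placeholders, and error handling is reduced to the expired-token case.

import requests

ACCOUNT_ID = 'your_account_id'            # placeholder
APPLICATION_KEY = 'your_application_key'  # placeholder

def authorize():
    # b2_authorize_account returns a token that eventually expires
    resp = requests.get(
        'https://api.backblazeb2.com/b2api/v1/b2_authorize_account',
        auth=(ACCOUNT_ID, APPLICATION_KEY))
    resp.raise_for_status()
    return resp.json()  # includes 'authorizationToken' and 'apiUrl'

auth = authorize()

def b2_call(endpoint, payload):
    global auth
    resp = requests.post(
        auth['apiUrl'] + '/b2api/v1/' + endpoint,
        json=payload,
        headers={'Authorization': auth['authorizationToken']})
    # If the token expired mid-session, re-authorize once and retry
    if resp.status_code == 401 and resp.json().get('code') == 'expired_auth_token':
        auth = authorize()
        return b2_call(endpoint, payload)
    resp.raise_for_status()
    return resp.json()

The Java SDK wraps this kind of retry logic, along with the other error cases mentioned above, so you don't have to write it yourself.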

While Backblaze has created the Java SDK, developers in the GitHub community have also created other SDKs for B2, for example for PHP (https://github.com/cwhite92/b2-sdk-php) and Go (https://github.com/kurin/blazer). Let us know in the comments about other SDKs you'd like to see, or perhaps start your own GitHub project. We will publish any updates in our next B2 roundup.

What You Can Do with Affordable and Available Cloud Storage

You’re probably aware that B2 is up to 75% less expensive than other similar cloud storage services like Amazon S3 and Microsoft Azure. Businesses and organizations are finding that projects that previously weren’t economically feasible with other Cloud Storage services are now not only possible, but a reality with B2. Here are a few recent examples:

SermonAudio wanted their media files to be readily available, but didn't want to build and manage their own internal storage farm. Until B2, cloud storage was just too expensive to use. Now they use B2 to store their audio and video files, and also as the primary source of downloads and streaming requests from their subscribers.
Fellowship Church wanted to escape from the ever-increasing amount of time they were spending saving their data to their LTO-based system. Using B2 saved countless hours of personnel time versus LTO, fit easily into their video processing workflow, and provided instant access at any time to their media library.
Panna Cooking replaced their closet full of archive hard drives with a cost-efficient hybrid-storage solution combining 45Drives and Backblaze B2 Cloud Storage. Archived media files that used to take hours to locate are now readily available regardless of whether they reside in local storage or in the B2 Cloud.

B2 Integrations

Leading companies in backup, archive, and sync continue to add B2 Cloud Storage as a storage destination for their customers. These companies realize that by offering B2 as an option, they can dramatically lower the total cost of ownership for their customers — and that’s always a good thing.

If your favorite application is not integrated to B2, you can do something about it. One integration partner told us they received over 200 customer requests for a B2 integration. The partner got the message and the integration is currently in beta test.

Below are some of the partner integrations completed in the past few months. You can check the B2 Partner Integrations page for a complete list.

Archiware — Both P5 Archive and P5 Backup can now store data in the B2 Cloud making your offsite media files readily available while keeping your off-site storage costs predictable and affordable.

Arq — Combine Arq and B2 for amazingly affordable backup of external drives, network drives, NAS devices, Windows PCs, Windows Servers, and Macs to the cloud.

GoodSync — Automatically synchronize and back up all your photos, music, email, and other important files between all your desktops, laptops, servers, external drives, and sync, or back up to B2 Cloud Storage for off-site storage.

QNAP — QNAP Hybrid Backup Sync consolidates backup, restoration, and synchronization functions into a single QTS application to easily transfer your data to local, remote, and cloud storage.

Morro Data — Their CloudNAS solution stores files in the cloud, caches them locally as needed, and syncs files globally among other CloudNAS systems in an organization.

Restic – Restic is a fast, secure, multi-platform command line backup program. Files are uploaded to a B2 bucket as de-duplicated, encrypted chunks. Each backup is a snapshot of only the data that has changed, making restores of a specific date or time easy.

Transmit 5 by Panic — Transmit 5, the gold standard for macOS file transfer apps, now supports B2. Upload, download, and manage files on tons of servers with an easy, familiar, and powerful UI.

UpdraftPlus — WordPress developers and admins can now use the UpdraftPlus Premium WordPress plugin to affordably back up their data to the B2 Cloud.

Getting Started with B2 Cloud Storage

If you’re using B2 today, thank you. If you’d like to try B2, but don’t know where to start, here’s a guide to getting started with the B2 Web Interface — no programming or scripting is required. You get 10 gigabytes of free storage and 1 gigabyte a day in free downloads. Give it a try.

The post B2 Cloud Storage Roundup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Build a Flick-controlled marble maze

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/flick-marble-maze/

Wiggle your fingers to guide a ball through a 3D-printed marble maze using the Pi Supply Flick board for Raspberry Pi!

Wiggle, wiggle, wiggle, wiggle, yeah

Using the Flick board, previously seen in last week's Hacker House gesture-controlled holographic visualiser, South Africa–based Tom Van den Bon has created a touch-free marble maze. If his Twitter is any indication, he was motivated by his love of game-making and 3D printing.

Tom Van den Bon on Twitter

Day 172 of #3dprint365. #3dprinted Raspberry PI Controlled Maze Thingie Part 3 #3dprint #3dprinter #thingiverse #raspberrypi #pisupply

All non-electronic parts of this build are 3D printed. The marble maze sits atop a motorised structure which moves along two axes thanks to servo motors. Tom controls the movement using gestures which are picked up by the Flick Zero, a Pi Zero–sized 3D-tracking board that can detect movement up to 15cm away.

Find the code for the maze, which takes advantage of the Flick library, on Tom’s GitHub account.
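If you'd like to experiment with the idea before printing a maze of your own, here is a bare-bones Python sketch of the core control loop. The GPIO pins are assumptions, and get_position() is a hypothetical stand-in for the Flick library's gesture-position callback; see Tom's repository for the real implementation.

from gpiozero import AngularServo
from time import sleep

# GPIO pins are assumptions for illustration; match them to your wiring
servo_x = AngularServo(17, min_angle=-30, max_angle=30)
servo_y = AngularServo(18, min_angle=-30, max_angle=30)

def get_position():
    # Hypothetical stand-in for the Flick library's position callback,
    # which reports the hand position as x, y values between 0.0 and 1.0
    return 0.5, 0.5

while True:
    x, y = get_position()
    # Map the 0..1 hand position onto the maze's tilt range
    servo_x.angle = (x - 0.5) * 60
    servo_y.angle = (y - 0.5) * 60
    sleep(0.05)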

Make your own games

Our free resources are a treasure trove of fun home-brew games that you can build with your friends and family.

If you like physical games such as Tom’s gesture-controlled maze, you should definitely check out our Python quick reaction game! In it, players are pitted against each other to react as quickly as possible to a randomly lighting up LED.
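To give you a flavour of that resource, a bare-bones version of the game fits in a few lines of Python with gpiozero. The pin numbers are assumptions for illustration; the full resource adds scoring and a proper game loop.

from gpiozero import LED, Button
from random import uniform
from time import sleep

led = LED(4)            # pin numbers are assumptions for illustration
player_1 = Button(14)
player_2 = Button(15)

sleep(uniform(2, 8))    # wait a random time so nobody can anticipate the light
led.on()

while True:
    if player_1.is_pressed:
        print("Player 1 wins!")
        break
    if player_2.is_pressed:
        print("Player 2 wins!")
        break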


You can also play solo with our Lights out game, where it’s you against four erratic lights eager to remain lit.

For games you can build on your computer with no need for any extra tech, Scratch games such as our button-smashing Olympic weightlifter and Hurdler projects are perfect — you can play them just using a keyboard and browser!


And if you’d like to really get stuck into learning about game development, then you’re in luck! CoderDojo’s Make your own game book guides you through all the steps of building a game in JavaScript, from creating the world to designing characters.

Cover of CoderDojo Nano Make your own game

And because I just found this while searching for image content for today's blog, here is a photo of Eben and Liz's cat Mooncake with a Raspberry Pi on her head. Enjoy!

A cat with a Raspberry Pi pin on its head

Ras-purry Pi?

The post Build a Flick-controlled marble maze appeared first on Raspberry Pi.