Tag Archives: drag and drop

Transition from Scratch to Python with FutureLearn

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/futurelearn-scratch-to-python/

With the launch of our first new free online course of 2018 — Scratch to Python: Moving from Block- to Text-based Programming — two weeks away, I thought this would be a great opportunity to introduce you to the ins and outs of the course content so you know what to expect.

FutureLearn: Moving from Scratch to Python

Learn how to apply the thinking and programming skills you’ve learnt in Scratch to text-based programming languages like Python.

Take the plunge into text-based programming

The idea for this course arose from our conversations with educators who had set up a Code Club in their schools. Most people start a club by teaching Scratch, a block-based programming language, because it allows learners to drag and drop blocks of pre-written code into a window to create a program. The blocks automatically snap together, making it easy to build fun and educational projects that don’t require much troubleshooting. You can do almost anything a beginner could wish for with Scratch, even physical computing to control LEDs, buzzers, buttons, motors, and more!


However, on our face-to-face training programme Picademy, educators told us that they were finding it hard to engage children who had outgrown Scratch and needed a new challenge. It was easy for me to imagine: a young learner, who once felt confident about programming using Scratch, is now confused by the alien, seemingly awkward interface of Python. What used to take them minutes in Scratch now takes them hours to code, and they start to lose interest — not a good result, I’m sure you’ll agree. I wanted to help educators to navigate this period in their learners’ development, and so I’ve written a course that shows you how to take the programming and thinking skills you and your learners have developed in Scratch, and apply them to Python.
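
To make the comparison concrete, here’s a small sketch (my own illustration, not a project from the course materials) of how a familiar Scratch script maps onto Python:

# Scratch: "repeat 10 [ say Hello! / wait 1 secs ]", written as Python
from time import sleep

for _ in range(10):   # the "repeat 10" block
    print("Hello!")   # the "say" block
    sleep(1)          # the "wait 1 secs" block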


Who is the course for?

Educators from all backgrounds who are working with secondary school-aged learners. It will also be interesting to anyone who has spent time working with Scratch and wants to understand how programming concepts translate between different languages.

“It was great fun, and I thought that the ideas and resources would be great to use with Year 7 classes.”
Sue Grey, Classroom Teacher

What is covered?

After showing you the similarities and differences of Scratch and Python, and how the skills learned using one can be applied to the other, we will look at turning more complex Scratch scripts into Python programs. Through creating a Mad Libs game and developing a username generator, you will see how programs can be simplified in a text-based language. We will give you our top tips for debugging Python code, and you’ll have the chance to share your ideas for introducing more complex programs to your students.
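
To give you a flavour of the Mad Libs activity, here’s a minimal sketch in Python (an illustration of the idea, not the course’s actual project code):

# Collect a few words, then drop them into a story template.
adjective = input("Give me an adjective: ")
noun = input("Give me a noun: ")
verb = input("Give me a verb: ")

story = "The {} {} loved to {} all day long.".format(adjective, noun, verb)
print(story)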


After that, we will look at different data types in Python and write a script to calculate how old you are in dog years. Finally, you’ll dive deeper into the possibilities of Python by installing and using external Python libraries to perform some amazing tasks.
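
The dog years exercise is a neat way into data types, because input() always returns a string, which has to be converted to a number before you can do arithmetic with it. A sketch of the idea (not the course’s exact script):

age = input("How old are you? ")   # input() returns a string, e.g. "12"
age = int(age)                     # convert the string to an integer

dog_years = age * 7                # the classic (if inexact) 7:1 rule
print("That's", dog_years, "in dog years!")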

By the end of the course, you’ll be able to:

  • Transfer programming and thinking skills from Scratch to Python
  • Use fundamental Python programming skills
  • Identify errors in your Python code based on error messages, and debug your scripts (see the example after this list)
  • Produce tools to support students’ transition from block-based to text-based programming
  • Understand the power of text-based programming and what you can create with it
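
On that third point: unlike Scratch, where blocks simply can’t be assembled incorrectly, Python reports mistakes with error messages that name the problem and the line it occurred on. A made-up example:

# A typo in a variable name...
message = "Hello, world"
print(mesage)

# ...produces a traceback ending in:
# NameError: name 'mesage' is not defined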

Where can I sign up?

The free four-week course starts on 12 March 2018, and you can sign up now on FutureLearn. While you’re there, be sure to check out our other free courses, such as Prepare to Run a Code Club, Teaching Physical Computing with a Raspberry Pi and Python, and our second new course Build a Makerspace for Young People — more information on it will follow in tomorrow’s blog post.

The post Transition from Scratch to Python with FutureLearn appeared first on Raspberry Pi.

12 B2 Power Tips for New Users

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/newbie-cloud-storage-guide/

You probably know that B2 is Backblaze’s fast and economical general purpose cloud storage, but do you know everything that you can do with it?

If you’re a B2 newbie, here are some blazing power tips to help you get the most out of B2 Cloud Storage.

If you’re a B2 expert or a developer, stay tuned. We’ll be publishing power tips for you in the near future. Enter your email address using the Join button at the top of the page and you won’t miss any upcoming blog posts.

1    Drag and Drop Files to B2

Use Backblaze’s drag-and-drop web interface to store, restore, and share B2 files.


2    Share Files You Have in B2

You can designate a B2 bucket as private or public. If the bucket is public and you’d like to share a file with others, you can create and copy a Friendly URL and paste it into an email or message.


3    Use B2 Just Like Any Other Drive

Use B2 just as if it were a drive on your computer — drag and drop files and folders, save files to it — using one of a number of integrations that let you mount B2 as a volume in your Windows or Macintosh file system (Mountain Duck, ExpanDrive, odrive). Pick the files you want to save, drop them in a desktop folder, and they are automatically saved to B2.


4    Drag and Drop To and From B2 from the Desktop, Too

Use Cyberduck, a B2 integration partner, to drag-and-drop files to and from B2 right from the Windows or Macintosh desktop.


5    Determine the Speed of your Connection to B2

You can check the speed and latency of your internet connection between your location and Backblaze’s data centers, and see how much data you could theoretically transfer in a day, at https://www.backblaze.com/speedtest/.
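
The theoretical maximum is simply your sustained upload rate multiplied by the seconds in a day. A quick back-of-the-envelope calculation in Python (the 50 Mbit/s figure is just an example):

mbits_per_second = 50
seconds_per_day = 24 * 60 * 60

gb_per_day = mbits_per_second * seconds_per_day / 8 / 1000  # bits -> bytes -> GB
print(round(gb_per_day), "GB per day")  # prints: 540 GB per day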


6    No Matter What Type of Data you Have, B2 Can Handle It

You can transfer any type or amount of data to B2 from any device that can connect to the internet, including Windows, Macintosh, Linux, servers, mobile devices, external drives, and NAS.


7    Get Your Files from B2 by Mail

You have a choice of how to receive your data from B2. You can download data directly or request that your data be shipped to you via FedEx.


8    Back Up Your Backups to B2

You can automatically back up your Apple Time Machine backup or Windows backup to a NAS and then back that up to B2 to give you both local and cloud backups for a 3-2-1 backup solution.


9    Protect Your B2 Account with Two-Factor Verification

You can (and should) protect your Backblaze account with two-factor verification (such as using an app on your smartphone), and you can use backup codes and SMS verification in case you lose access to your smartphone.


10    Preview Photos Stored on B2 from the Web

Preview your photos as thumbnails (and optionally download individual photos) in common image formats (including jpg, png, img, tiff, and gif) with the B2 web interface.


11    B2 Has Group Management, Too

Backblaze Groups works for B2, too — just like Backblaze Personal Backup and Business Backup. You can manage billing and group membership, and control access, using Group Management in your Backblaze account dashboard.


12    B2 Integrations Make B2 More Powerful and Useful

There are more than 30 software and hardware integrations that make B2 more powerful. You can visit our integrations page to find a solution that works for you.

Want to Learn More About B2?

You can find more information on B2 on our website and in our help pages.

The post 12 B2 Power Tips for New Users appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Jack – Drag & Drop Clickjacking Tool For PoCs

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/uMXdj1EvNhM/

Jack is a drag-and-drop, web-based clickjacking tool to assist in the development of PoCs made with static HTML and JavaScript. Jack is web based and requires either a web server to serve its HTML and JS content or can be run locally. Typically something like Apache will suffice, but anything that is able […]

The post Jack – Drag…

Read the full post at darknet.org.uk

Backup and Restore Time Machine using Synology and the B2 Cloud

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/time-machine-synology-b2-backup-restore/

Have you ever wished that you could have Time Machine, your Synology NAS, and B2 Cloud Storage work together to automatically backup your Mac locally and to the cloud? That would be cool. Of course, you’d also want to be able to restore your Time Machine backup from your Synology NAS or the B2 cloud. And while you’re wishing, it would be great if you could have an encrypted USB Hard Drive show up at your doorstep with your Time Machine backup. Stop wishing! You can do all that today. Here’s how.

Overview

Apple’s Time Machine app, included with every Mac, creates automatic backups of your Mac computer. Typically, these backups are stored on a local external hard drive. Time Machine backups can also be stored on other devices such as a Network Attached Storage (NAS) system on your network. If your computer crashes or you get a new computer, you can restore your data from the Time Machine backup.

We advocate a “3-2-1” backup strategy that combines local storage like a Time Machine backup with offsite backup to provide an additional layer of security and redundancy. That’s 3 copies of your data: 2 local (your “live” version and your Time Machine backup), and 1 offsite. If something happens to your computer or your NAS – if they’re stolen, or if some sort of disaster strikes – you can still count on your cloud backup to keep you safe.

You can use Backblaze to back up your computer to the cloud and use Time Machine to create a local backup. In fact, many of our customers do exactly that. But there’s another way to approach this that’s more efficient: Make a copy of the Time Machine backup and send it offsite automatically.

A Streamlined 3-2-1 Backup Plan

diagram of automatic backup of your Mac locally and to the cloud

The idea is simple: Have Time Machine store its backup on your Synology NAS device, then sync the Time Machine backup from the Synology NAS to Backblaze B2 Cloud Storage. Once this is set up, the 3-2-1 backup process occurs automatically and your files are stored locally and off-site.

We’ve prepared a guide titled “How to backup your Time Machine backup to Synology and B2” in the Backblaze Knowledge Base to help you with the setup of Time Machine, Synology, and Backblaze B2. Please read through the instructions before starting the actual installation.

Restoring Your Time Machine Backup

The greatest backup process in the world is of little value if you can’t restore your data. With your Time Machine backup now stored on your Synology NAS and in B2, you have multiple ways to restore your files.

Day-to-day Restores

From time to time you may need to restore a file or two from your local backup; in this case, the Time Machine backup stored on your Synology NAS. This works just like having your Time Machine backup stored on a locally connected external hard drive:

  • On the Mac menu bar (top right) locate and click on the Time Machine icon.
  • Select “Enter Time Machine”.
  • Locate the file or files you wish to restore.
  • Click “Restore” to restore the selected file(s).

The only thing to remember is that your Synology NAS device needs to be accessible via your network to access the Time Machine backup.

Full Restores

Most often you would do a full restore of your Time Machine backup if you are replacing your computer or the hard/SSD drive inside.

Method 1: Restore from the Synology NAS device

The most straightforward method is to restore the Time Machine backup directly from the Synology NAS device. You can restore your entire Time Machine backup to your new or reformatted Mac by having Apple’s Migration Assistant app use the Time Machine backup stored on the Synology NAS as the restore source. The Migration Assistant app is included with your Mac.

Of course, in the case of a disaster or theft, the Synology NAS may suffer the same fate as your Mac. In that case, you’ll want to restore your Time Machine backup from Backblaze B2. Here’s how.

Method 2: Restore a Time Machine Backup from B2 via a USB Hard Drive

The second method is to prepare a B2 snapshot of your Time Machine backup and then have the snapshot copied to a USB hard drive you purchase from Backblaze. Think of a snapshot as a container that holds a copy of the files you wish to download. Instead of downloading each file individually, you create a snapshot of the files and download one item, the snapshot. In this case, you create the snapshot of your Time Machine backup, and we copy the snapshot to the hard drive and FedEx it to you. You then use the USB Hard Drive as a restore source when using Migration Assistant.


We’ve prepared a guide titled, “How to restore your Time Machine backup from B2” in the Backblaze Knowledge Base to walk you through the process of restoring your Time Machine backup from Backblaze B2 using an encrypted USB Hard Drive.

Method 3: Restore a Time Machine Backup from B2 via Download

When using this method, give consideration to the size of the Time Machine backup. It is not uncommon for this file to be several hundred gigabytes or even a terabyte or two. Even with a reasonably fast network connection, downloading such a large file can take a considerable amount of time.

Prepare a snapshot of your Time Machine backup from B2 and download it to your “new” Mac. After you “unzip” the file you can use Migration Assistant on your new Mac to restore the Time Machine backup using the unzipped file as the restore source.


Summary

As we noted earlier, you can use Backblaze Computer Backup to back up your computer to the cloud and use Time Machine to create a local backup. That works fine, but if you are using a Synology NAS device in your environment, the 3-2-1 strategy discussed above gives you another option. In that case, all of the Time Machine backups in your home or office can reside on the Synology NAS. Then you don’t need an external drive to store the Time Machine backup for each computer, and all of the Time Machine backups can sync automatically to Backblaze B2 Cloud Storage.

In summary, if you have a Mac, a Synology NAS, and a Backblaze B2 account, you can have an automatic 3-2-1 Time Machine backup of the files on your computer. You don’t have to drag and drop files into backup folders, remember to hit the “backup now” button, or hoard external USB backup drives in your closet. Enjoy automatic, continuous backup, locally and in the cloud. 3-2-1 backup has never been so easy.

The post Backup and Restore Time Machine using Synology and the B2 Cloud appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Cloud’s Software: A Look Inside Backblaze

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/the-clouds-software-a-look-inside-backblaze/

When most of us think about “the cloud,” we have an abstract idea that it’s computers in a data center somewhere — racks of blinking lights and lots of loud fans. There’s truth to that. Have a look inside our datacenter to get an idea. But besides the impressive hardware — and the skilled techs needed to keep it running — there’s software involved. Let’s take a look at a few of the software tools that keep our operation working.

Our data center is populated with Storage Pods, the servers that hold the data you entrust to us if you’re a Backblaze customer or you use B2 Cloud Storage. Inside each Storage Pod are dozens of 3.5-inch spinning hard disk drives — the same kind you’ll find inside a desktop PC. Storage Pods are mounted on racks inside the data center. Those Storage Pods work together in Vaults.

Vault Software

The Vault software that keeps those Storage Pods humming is one of the backbones of our operation. It’s what makes it possible for us to scale our services to meet your needs while delivering durability, scalability, and fast performance.

The Vault software distributes data evenly across 20 Storage Pods. Drives in the same position inside each Storage Pod are grouped together in software in what we call a “tome.” When a file gets uploaded to Backblaze, it’s split into pieces we call “shards” and distributed across all 20 drives.

Each file is stored as 20 shards: 17 data shards and three parity shards. As the name implies, the data shards comprise the information in the files you upload to Backblaze. Parity shards add redundancy so that a file can be completely restored from a Vault even if some of the pieces are not available.

Because those shards are distributed across 20 Storage Pods in 20 cabinets, a Storage Pod can go down and the Vault will still operate unimpeded. An entire cabinet can lose power and the Vault will still work fine.

Files can be written to the Vault even if a Storage Pod is down, with two parity shards still protecting the data. Even in the extreme — and unlikely — case where three Storage Pods in a Vault are offline, the files in the Vault are still available, because they can be reconstructed from the 17 available pieces.

Reed-Solomon Erasure Coding

Erasure coding makes it possible to rebuild a data file even if parts of the original are lost. Having effective erasure coding is vital in a distributed environment like a Backblaze Vault. It helps us keep your data safe even when the hardware that the data is stored on needs to be serviced.

We use Reed-Solomon erasure encoding. It’s a proven technique used in Linux RAID systems, by Microsoft in its Azure cloud storage, and by Facebook too. The Backblaze Vault Architecture is capable of delivering 99.99999% annual durability thanks in part to our Reed-Solomon erasure coding implementation.
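
To build some intuition for how parity shards make reconstruction possible, here’s a toy single-parity sketch in Python. This is only an illustration of the principle; our production implementation is Reed-Solomon, written in Java, and uses three parity shards rather than one.

from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Three equal-sized data shards plus one parity shard (the XOR of the data shards).
data_shards = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor_bytes, data_shards)

# Lose any one data shard, say shard 1; XOR-ing the survivors rebuilds it.
survivors = [data_shards[0], data_shards[2], parity]
assert reduce(xor_bytes, survivors) == data_shards[1]

Reed-Solomon generalizes this idea: with 17 data shards and 3 parity shards, any 3 of the 20 can go missing and the original data can still be reconstructed.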

Our own Brian Beach explains how Reed-Solomon encoding works in a video on our blog.

We threw out the Linux RAID software we had been using prior to the implementation of the Vaults and wrote our own Reed-Solomon implementation from scratch. We’re very proud of it; so much so that we’ve released it as open source for you to use in your own projects, if you wish.

We developed our Reed-Solomon implementation as a Java library. Why? When we first started this project, we assumed that we would need to write it in C to make it run as fast as we needed. It turns out that the modern Java virtual machines running on our servers are great, and their just-in-time compilers produce code that runs pretty quickly.

All the work we’ve done to build a reliable, scalable, affordable solution for storing data in a “cloud” led to the creation of B2 Cloud Storage. B2 lets you store your data in the cloud for a fraction of what you’d spend elsewhere — 1/4 the price of Amazon S3, for example.

Using Our Storage

Having over 300 Petabytes of data storage available isn’t very useful unless we can store data and reliably restore it too. We offer two ways to store data with Backblaze: via a client application or via direct access. Our client application, Backblaze Computer Backup, is installed on your Mac or Windows system and basically does everything related to automatically backing up your computer. We locate the files that are new or changed and back them up. We manage versions, deduplicate files, and more. The Backblaze app does all the work behind the scenes.

The other way to use our storage is via direct access. You can use a Web GUI, a Command Line Interface (CLI) or an Application Programming Interface (API). With any of these methods, you are in charge of what gets stored in the Backblaze cloud. This is what Backblaze B2 is all about. You can log into B2 and use the Web GUI to drag and drop files that are stored in the Backblaze cloud. You decide what gets added and deleted, and how many versions of a file you want to keep. Think of B2 as your very own bucket in the cloud where you can store your files.

We also have mobile apps for iOS and Android devices to help you view and share any backed up files you have on the go. You can download them, play back or view media files, and share them as you need.

We focused on creating a native, integrated experience for you when you use our software. We didn’t take the shortcut of building a Java app for the desktop: on the Mac our app is built using Xcode, and on the PC it’s built using C. The app is designed for lightweight, unobtrusive performance. If you do need to adjust its performance, we give you that ability. You have control over throttling the backup rate. You can even adjust the number of CPU threads dedicated to Backblaze, if you choose.

When we first released the software almost a decade ago we had no idea that we’d iterate it more than 1,000 times. That’s the threshold we reached late last year, however! We released version 4.3.0 in December. We’re still plugging away at it and have plans for the future, too.

Our Philosophy: Keep It Simple

“Keep It Simple” is the philosophy that underlies all of the technology that powers our hardware. It makes it possible for you to affordably, reliably back up your computers and store data in the cloud.

We’re not interested in creating elaborate, difficult-to-implement solutions or pricing schemes that confuse and confound you. Our backup service is unlimited and unthrottled for one low price. We offer cloud storage for 1/4th the price of the competition. And we make it easy to access with desktop, mobile and web interfaces, command line tools and APIs.

Hopefully we’ve shed some light on the software that lets our cloud services operate. Have questions? Join the discussion and let us know.

The post The Cloud’s Software: A Look Inside Backblaze appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

B2 for Beginners: Inside the B2 Web interface

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/b2-for-beginners-inside-the-b2-web-interface/


B2 Cloud Storage enables you to store data in the cloud for a fraction of what you’ll pay with other services. For instance, we’re one-fourth the price of Amazon’s S3. We’ve made B2 easy to access thanks to a web interface, an API, and a command line interface. Let’s get to know the web interface a bit better: it’s the easiest way to get around B2, and it’s a good way to get a handle on the fundamentals of B2 use.

Anyone with a Backblaze account can set up B2 access by visiting My Settings. Look for Enabled Products and check B2 Cloud Storage.

B2 is accessed the same way as your Backblaze Computer Backup. The sidebar on the left side of your My Account window shows you all the Backblaze services you use, including B2. Let’s go through the individual links under B2 Cloud Storage to get a sense of what they are and what they do.

Buckets

Data in B2 is stored in buckets. Think of a bucket as a top-level folder or directory. You can create as many buckets as you want. What’s more, you can put in as many files as you want. Buckets can contain files of any type or size.


Third-party applications and services can integrate with B2, and many already do. The Buckets screen is where you can get your Account ID information and create an application key – a unique identifier your apps will use to securely connect to B2. If you’re using a third-party app that needs access to your bucket, such as a NAS backup app or a file sync tool, this is where you’ll find the info you need to connect. (We’ll have more info about how to back up your NAS to B2 very soon!)
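
Under the hood, an integration uses that account ID and application key to call the B2 API. Here’s a minimal sketch in Python of the first step, authorizing against the b2_authorize_account endpoint (the credentials are placeholders, and a real integration would go on to call other endpoints):

import base64
import requests

ACCOUNT_ID = "your-account-id"            # shown on the Buckets screen
APPLICATION_KEY = "your-application-key"  # created on the Buckets screen

# b2_authorize_account uses HTTP basic auth of "accountId:applicationKey".
token = base64.b64encode("{}:{}".format(ACCOUNT_ID, APPLICATION_KEY).encode()).decode()
response = requests.get(
    "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
    headers={"Authorization": "Basic " + token},
)
auth = response.json()

# The response includes an authorization token and the base URL for later API calls.
print(auth["apiUrl"], auth["authorizationToken"])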

The Buckets window lists the buckets you’ve created and provides basic information including creation date, ID, public or private type, lifecycle information, number of files, size and snapshots.

Click the Bucket Settings link to adjust each bucket’s individual settings. You can specify if files in the bucket are public or private. Private files can’t be shared, while public ones can be.

You can also tag your bucket with customized information encoded in JSON format. Custom info can contain letters, numbers, “-” and “_”.

Browse Files

Click the Upload/Download button to see a directory of each bucket. Alternatively, click the Browse Files link on the left side of the B2 interface.


You can create a new subdirectory by clicking the New Folder button, or begin to upload files by clicking the Upload button. You can drag and drop the files you’d like to upload and Backblaze will handle that for you. Alternatively, clicking on the dialog box that appears will let you select the files on your computer you’d like to upload.

Info Button

Next to each individual file is an information button. Click it for details about the file, including name, location, kind, size and other details. You’ll also see a “Friendly URL” link. If the bucket is public and you’d like to share this file with others, you may copy that Friendly URL and paste it into an email or message to let people know where to find it.

Download

You can download the contents of your buckets by clicking the checkbox next to the filename and clicking the Download button. You can also delete files and create snapshots. Snapshots are helpful if you want to preserve copies of your files in their present state for some future download or recovery. You can also create a snapshot of the full bucket. If you have a large snapshot, you can order it as a hard drive instead of downloading it. We’ll get more into snapshots in a future blog post.

Lifecycle Settings

We recently introduced Lifecycle Settings to keep your buckets from getting cluttered with too many versions of files. Our web interface lets you manage these settings for each individual bucket.

Lifecycle Rules

By default, the bucket’s lifecycle setting is to keep all versions of files you upload. The web interface lets you adjust that so B2 only keeps the last file version, keeps the last file for a specific number of days, or keeps files based on your own custom rule. You can determine the file path, the number of days until the file is hidden, and the number of days until the file is deleted.
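
Behind the interface, a lifecycle rule is just three pieces of information: a file name prefix, days until hiding, and days until deletion. Expressed as the kind of structure the B2 API accepts (the prefix and day counts here are made up for illustration):

# Hide files under "logs/" 30 days after upload; delete hidden versions 7 days later.
lifecycle_rule = {
    "fileNamePrefix": "logs/",
    "daysFromUploadingToHiding": 30,
    "daysFromHidingToDeleting": 7,
}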


Reports

Backblaze updates your account daily with details on what’s happening with your B2 files. These reports are accessible through the B2 interface under the Reports tab. Clicking on Reports will reveal an easy-to-understand visual chart showing you the average number of GB stored, total GB downloaded, and total number of transactions for the month.


Look further down the page for a breakdown of monthly transactions by type, along with charts that help you track average GB stored, GB downloaded and count of average stored files for the month.

Caps and Alerts

One of our goals with B2 was to take the surprise out of cloud storage fees. The B2 web GUI sports a Caps & Alerts section to help you control how much you spend on B2.


This is where you can see – and limit – daily storage caps, daily downloads, and daily transactions. “Transactions” are interactions with your account like creating a new bucket, listing the contents of a bucket, or downloading a file.

You can have those alerts sent to your cell phone and email, so you’ll never be hit with an unwelcome surprise in the form of an unexpected bill. The first 10 GB of storage is free, with unlimited free uploads and 1 GB of free downloads each day.

Edit Caps

Click the Edit Caps button to enter dollar amount limits for storage, download bandwidth, Class B and Class C transactions separately (or specify No Cap if you don’t want to be encumbered). This way, you maintain control over how much you spend with B2.

And There’s More

That’s an overview of the B2 web GUI to help you get started using B2 Cloud Storage. If you’re more technical and are interested in connecting to B2 using our API instead, make sure to check out our B2 Starter Guide for a comprehensive overview of what’s under the hood.

Still have questions about the B2 web GUI, or ideas for how we can make it better? Fire away in the comments, we want to hear from you!

The post B2 for Beginners: Inside the B2 Web interface appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Grafana 4.0 Stable Release

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2016/12/12/grafana-4.0-stable-release/

Grafana v4.0.2 stable is now available for download. After about four weeks of beta fixes and testing,
we are proud to announce that Grafana v4.0 stable is now released and production ready. This release
contains a ton of minor new features, fixes, and improved UX. But on top of the usual goodies is a
major new feature: Alerting! Read on below for a detailed description of what’s new in Grafana v4!

Alerting

Alerting is a really revolutionary feature for Grafana. It transforms Grafana from a
visualization tool into a truly mission-critical monitoring tool. The alert rules are very easy to
configure using your existing graph panels, and threshold levels can be set simply by dragging handles to
the right side of the graph. The rules will be continually evaluated by grafana-server, and
notifications will be sent out when the rule conditions are met.

This feature has been worked on for over a year, with many iterations and rewrites,
just to make sure the foundations are really solid. We are really proud to finally release it!
Since alert rule execution happens in the backend, not all data sources are supported.
Right now Graphite, Prometheus, InfluxDB, and OpenTSDB are supported. Elasticsearch support is being
worked on but will not be ready for the v4 release.

Rules

The rule config allows you to specify a name, how often the rule should be evaluated, and a series
of conditions that all need to be true for the alert to fire.

Currently the only condition type is a Query condition, which allows you to specify a query letter,
a time range, and an aggregation function. The letter refers to a query you have already added in the
Metrics tab. The query result, reduced by the aggregation function, is a single value that is then
used in the threshold check.
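
In other words, a Query condition reduces a series to a single number and compares it with the threshold. As a rough sketch of that evaluation logic in Python (purely illustrative, not Grafana’s actual code):

# Condition: avg() of query(A, 5m, now) IS ABOVE 100
def condition_met(datapoints, threshold=100):
    aggregated = sum(datapoints) / len(datapoints)  # the avg() reduction
    return aggregated > threshold                   # the threshold check

print(condition_met([80, 120, 140]))  # True: the average is about 113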

We plan to add other condition types in the future, like Other Alert, where you can include the state
of another alert in your conditions, and Time Of Day.

Notifications

Alerting would not be very useful if there were no way to send notifications when rules trigger and change state. You
can set up notifications of different types. We currently have Slack, PagerDuty, Email, and Webhook, with more in the
pipeline. The notifications can then be added to your alert rules.
If you have configured an external image store in the grafana.ini config file (S3 and WebDAV options are available),
you can get very rich notifications, with an image of the graph and the metric
values included in the notification.

Annotations

Alert state changes are recorded in a new annotation store that is built into Grafana. This store
currently only supports storing annotations in Grafana’s own internal database (MySQL, Postgres, or SQLite).
The Grafana annotation storage is currently used only for alert state changes, but we hope to add the ability for users
to add graph comments in the form of annotations directly from within Grafana in a future release.

Alert List Panel

This new panel allows you to show alert rules or a history of alert rule state changes. You can filter based on the states
you’re interested in. It’s a very useful panel for overview-style dashboards.

Ad-hoc filter variable

This is a new and very different type of template variable. It allows you to create new key/value filters on the fly,
with autocomplete for both keys and values. The filter condition will be automatically applied to all
queries that use that data source. This feature opens up more exploratory dashboards. Take, for example,
a dashboard for Elasticsearch log data: it uses one query variable that allows you to quickly change how the data
is grouped, and an interval variable for controlling the granularity of the time buckets. What was missing
was a way to dynamically apply filters to the log query. With the Ad-Hoc Filters variable you can
dynamically add filters to any log property!

UX Improvements

We always try to bring some UX/UI refinements & polish in every release.

TV-mode & Kiosk mode

Grafana is so often used on wall-mounted TVs that we figured a clean TV mode would be
really nice. In TV mode the top navbar and the row & panel controls will all fade to transparent.

This happens automatically after one minute of user inactivity but can also be toggled manually
with the d v sequence shortcut. Any mouse movement or keyboard action will
restore navbar & controls.

Another feature is the kiosk mode. This can be enabled with d k
shortcut or by adding &kiosk to the URL when you load a dashboard.
In kiosk mode the navbar is completely hidden/removed from view.

New row menu & add panel experience

We spent a lot of time improving the dashboard building experience, trying to make it both
more efficient and easier for beginners. After many good but not great experiments
with a build mode, we eventually decided to just improve the green row menu and
continue work on a build mode for a future release.

The new row menu automatically slides out when you mouse over the edge of the row. You no longer need
to hover over the small green icon and then click it to expand the row menu.

There are some minor improvements to drag-and-drop behaviour. Now, when dragging a panel from one row
to another, the panel is inserted and Grafana automatically makes room for it.
When you drag a panel within a row, you simply reorder the panels.

You can also drag and drop a new panel from this menu. This is not required; you can just click the
panel type and it will be inserted at the end of the row automatically. Dragging a new panel has the
advantage that you can insert it wherever you want, not just at the end of the row.

We plan to further improve dashboard building in the future with a more rich grid & layout system.

Keyboard shortcuts

Grafana v4 introduces a number of really powerful keyboard shortcuts. You can now focus a panel
by hovering over it with your mouse. With a panel focused you can simply hit e to toggle panel
edit mode, or v to toggle fullscreen mode. p r removes the panel, and p s opens the share
modal.

Some nice navigation shortcuts are:

  • g h for go to home dashboard
  • s s open search with starred pre-selected
  • s t open search in tags list view

Upgrade & Breaking changes

There are no breaking changes. Old dashboards and features should work the same. Grafana-server will automatically upgrade its database
schema on restart. It’s advisable to back up Grafana’s database before updating.

If you are using plugins, make sure to update them, as some might not work perfectly in v4.

You can update plugins using grafana-cli:

grafana-cli plugins update-all

Changelog

Check out the CHANGELOG.md file for a complete list
of new features, changes, and bug fixes.

Download

Head to the v4 download page for download links & instructions.

Big thanks to all the Grafana users and devs out there who have helped with bug reports, feature
requests and pull requests!

Until next time, keep on graphing!
Torkel Ödegaard

Building End-to-End Continuous Delivery and Deployment Pipelines in AWS and TeamCity

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/devops/building-end-to-end-continuous-delivery-and-deployment-pipelines-in-aws-and-teamcity/

By Balaji Iyer, Janisha Anand, and Frank Li

Organizations that transform their applications to cloud-optimized architectures need a seamless, end-to-end continuous delivery and deployment workflow: from source code, to build, to deployment, to software delivery.

Continuous delivery is a DevOps software development practice where code changes are automatically built, tested, and prepared for a release to production. The practice expands on continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has undergone a standardized test process.

Continuous deployment is the process of deploying application revisions to a production environment automatically, without explicit approval from a developer. This process makes the entire software release process automated. Features are released as soon as they are ready, providing maximum value to customers.

These two techniques enable development teams to deploy software rapidly, repeatedly, and reliably.

In this post, we will build an end-to-end continuous deployment and delivery pipeline using AWS CodePipeline (a fully managed continuous delivery service), AWS CodeDeploy (an automated application deployment service), and TeamCity’s AWS CodePipeline plugin. We will use AWS CloudFormation to set up and configure the end-to-end infrastructure and application stacks. The pipeline pulls source code from an Amazon S3 bucket, an AWS CodeCommit repository, or a GitHub repository. The source code will then be built and tested using TeamCity’s continuous integration server. Then AWS CodeDeploy will deploy the compiled and tested code to Amazon EC2 instances.

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3.

Overview

Here are the steps:

  1. Continuous integration server setup using TeamCity.
  2. Continuous deployment using AWS CodeDeploy.
  3. Building a delivery pipeline using AWS CodePipeline.

In less than an hour, you’ll have an end-to-end, fully-automated continuous integration, continuous deployment, and delivery pipeline for your application. Let’s get started!

1. Continuous integration server setup using TeamCity

Click the Launch Stack button to launch an AWS CloudFormation stack to set up a TeamCity server. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials. This stack provides an automated way to set up a TeamCity server based on the instructions here. You can download the template used for this setup from here.

The CloudFormation template does the following:

  1. Installs and configures the TeamCity server and its dependencies in Linux.
  2. Installs the AWS CodePipeline plugin for TeamCity.
  3. Installs a sample application with build configurations.
  4. Installs PHP meta-runners required to build the sample application.
  5. Redirects TeamCity port 8111 to 80.

Choose the AWS region where the TeamCity server will be hosted. For this demo, choose US East (N. Virginia).

Select a region

On the Select Template page, choose Next.

On the Specify Details page, do the following:

  1. In Stack name, enter a name for the stack. The name must be unique in the region in which you are creating the stack.
  2. In InstanceType, choose the instance type that best fits your requirements. The default value is t2.medium.

Note: The default instance type exceeds what’s included in the AWS Free Tier. If you use t2.medium, there will be charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

  3. In KeyName, choose the name of your Amazon EC2 key pair.
  4. In SSHLocation, enter the IP address range that can be used to connect through SSH to the EC2 instance. SSH and HTTP access is limited to this IP address range.

Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 to any single domain or, if you are representing a larger IP address space, use the correct CIDR block notation.

Specify Details

Choose Next.

Although it’s optional, on the Options page, type TeamCityServer for the instance name. This is the name used in the CloudFormation template for the stack. It’s a best practice to name your instance, because it makes it easier to identify or modify resources later on.

Choose Next.

On the Review page, choose the Create button. It will take several minutes for AWS CloudFormation to create the resources for you.

Review

When the stack has been created, you will see a CREATE_COMPLETE message on the Overview tab in the Status column.

Events

You have now successfully created a TeamCity server. To access the server, on the EC2 Instance page, choose Public IP for the TeamCityServer instance.

Public DNS

On the TeamCity First Start page, choose Proceed.

TeamCity First Start

Although an internal database based on the HSQLDB database engine can be used for evaluation, TeamCity strongly recommends that you use an external database as a back-end TeamCity database in a production environment. An external database provides better performance and reliability. For more information, see the TeamCity documentation.

On the Database connection setup page, choose Proceed.

Database connection setup

The TeamCity server will start, which can take several minutes.

TeamCity is starting

Review and accept the TeamCity License Agreement, and then choose Continue.

Next, create an Administrator account. Type a user name and password, and then choose Create Account.

You can navigate to the demo project from Projects in the top-left corner.

Projects

Note: You can create a project from a repository URL (the option used in this demo), or you can connect to your managed Git repositories, such as GitHub or BitBucket. The demo app used in this example can be found here.

We have already created a sample project configuration. Under Build, choose Edit Settings, and then review the settings.

Demo App

Choose Build Step: PHP – PHPUnit.

Build Step

The fields on the Build Step page are already configured.


Choose Run to start the build.

Run Test

To review the tests that are run as part of the build, choose Tests.

Build


You can view any build errors by choosing Build log from the same drop-down list.

Now that we have a successful build, we will use AWS CodeDeploy to set up a continuous deployment pipeline.

2. Continuous deployment using AWS CodeDeploy

Click the Launch Stack button to launch an AWS CloudFormation stack that will use AWS CodeDeploy to set up a sample deployment pipeline. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

You can download the master template used for this setup from here. The template nests two CloudFormation templates to execute all dependent stacks cohesively.

  1. Template 1 creates a fleet of up to three EC2 instances (with a base operating system of Windows or Linux), associates an instance profile, and installs the AWS CodeDeploy agent. The CloudFormation template can be downloaded from here.
  2. Template 2 creates an AWS CodeDeploy deployment group and then installs a sample application. The CloudFormation template can be downloaded from here.

Choose the same AWS region you used when you created the TeamCity server (US East (N. Virginia)).

Note: The templates contain Amazon Machine Image (AMI) mappings for us-east-1, us-west-2, eu-west-1, and ap-southeast-2 only.

On the Select Template page, choose Next.


On the Specify Details page, in Stack name, type a name for the stack. In the Parameters section, do the following:

  1. In AppName, you can use the default, or you can type a name of your choice. The name must be between 2 and 15 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but the name must start with an alphanumeric character.
  2. In DeploymentGroupName, you can use the default, or you can type a name of your choice. The name must be between 2 and 25 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but the name must start with an alphanumeric character.


  3. In InstanceType, choose the instance type that best fits the requirements of your application.
  4. In InstanceCount, type the number of EC2 instances (up to three) that will be part of the deployment group.
  5. For Operating System, choose Linux or Windows.
  6. Leave TagKey and TagValue at their defaults. AWS CodeDeploy will use this tag key and value to locate the instances during deployments. For information about Amazon EC2 instance tags, see Working with Tags Using the Console.
  7. In S3Bucket and S3Key, type the bucket name and S3 key where the application is located. The default points to a sample application that will be deployed to instances in the deployment group. Based on what you selected in the OperatingSystem field, use the following values.
    Linux:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Linux.zip
    Windows:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Windows.zip
  8. In KeyName, choose the name of your Amazon EC2 key pair.
  9. In SSHLocation, enter the IP address range that can be used to connect through SSH/RDP to the EC2 instance.


Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 to any single domain or, if you are representing a larger IP address space, use the correct CIDR block notation.

Follow the prompts, and then choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. Review the other settings, and then choose Create.


It will take several minutes for CloudFormation to create all of the resources on your behalf. The nested stacks will be launched sequentially. You can view progress messages on the Events tab in the AWS CloudFormation console.


You can see the newly created application and deployment groups in the AWS CodeDeploy console.


To verify that your application was deployed successfully, navigate to the DNS address of one of the instances.



Now that we have successfully created a deployment pipeline, let’s integrate it with AWS CodePipeline.

3. Building a delivery pipeline using AWS CodePipeline

We will now create a delivery pipeline in AWS CodePipeline with the TeamCity AWS CodePipeline plugin.

  1. Build a new pipeline with Source and Deploy stages using AWS CodePipeline.
  2. Create a custom action for the TeamCity Build stage.
  3. Create an AWS CodePipeline action trigger in TeamCity.
  4. Create a Build stage in the delivery pipeline for TeamCity.
  5. Publish the build artifact for deployment.

Step 1: Build a new pipeline with Source and Deploy stages using AWS CodePipeline.

In this step, we will create an Amazon S3 bucket to use as the artifact store for this pipeline.

  1. Install and configure the AWS CLI.
  2. Create an Amazon S3 bucket that will host the build artifact. Replace account-number with your AWS account number in the following steps.
    $ aws s3 mb s3://demo-app-build-account-number
  3. Enable bucket versioning:
    $ aws s3api put-bucket-versioning --bucket demo-app-build-account-number --versioning-configuration Status=Enabled
  4. Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.
  • OSX/Linux:
    $ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip | aws s3 cp - s3://demo-app-build-account-number
  • Windows:
    $ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/Sample_Windows_App.zip
    $ aws s3 cp ./Sample_Windows_App.zip s3://demo-app-account-number

Note: You can use AWS CloudFormation to perform these steps in an automated way; this template will be used when you launch the stack. Use the following commands to extract the Amazon S3 bucket name, enable versioning on the bucket, and copy over the sample artifact.

$ export BUCKET_NAME="$(aws cloudformation describe-stacks --stack-name "S3BucketStack" --output text --query 'Stacks[0].Outputs[?OutputKey==`S3BucketName`].OutputValue')"
$ aws s3api put-bucket-versioning --bucket "$BUCKET_NAME" --versioning-configuration Status=Enabled && wget https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip && aws s3 cp ./Sample_Linux_App.zip "s3://$BUCKET_NAME"

You can create a pipeline by using a CloudFormation stack or the AWS CodePipeline console.

Option 1: Use AWS CloudFormation to create a pipeline

We’re going to create a two-stage pipeline that uses a versioned Amazon S3 bucket and AWS CodeDeploy to release a sample application. (You can use an AWS CodeCommit repository or a GitHub repository as the source provider instead of Amazon S3.)

Click the Launch Stack button to launch an AWS CloudFormation stack to set up a new delivery pipeline using the application and deployment group created in an earlier step. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

Choose the US East (N. Virginia) region, and then choose Next.

Leave the default options, and then choose Next.


On the Options page, choose Next.


Select the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. This will create the delivery pipeline in AWS CodePipeline.

Option 2: Use the AWS CodePipeline console to create a pipeline

On the Create pipeline page, in Pipeline name, type a name for your pipeline, and then choose Next step.

Depending on where your source code is located, you can choose Amazon S3, AWS CodeCommit, or GitHub as your Source provider. The pipeline will be triggered automatically upon every check-in to your GitHub or AWS CodeCommit repository or when an artifact is published into the S3 bucket. In this example, we will be accessing the product binaries from an Amazon S3 bucket.

Choose Next step.

For the Amazon S3 location, enter s3://demo-app-build-account-number/Sample_Linux_App.zip (or Sample_Windows_App.zip).

Note: AWS CodePipeline requires a versioned S3 bucket for source artifacts. Enable versioning for the S3 bucket where the source artifacts will be located.

On the Build page, choose No Build. We will update the build provider information later on.

For Deployment provider, choose CodeDeploy. For Application name and Deployment group, choose the application and deployment group we created in the deployment pipeline step, and then choose Next step.

An IAM role will provide the permissions required for AWS CodePipeline to perform the build actions and service calls.  If you already have a role you want to use with the pipeline, choose it on the AWS Service Role page. Otherwise, type a name for your role, and then choose Create role.  Review the predefined permissions, and then choose Allow. Then choose Next step.

 

For information about AWS CodePipeline access permissions, see the AWS CodePipeline Access Permissions Reference.


Review your pipeline, and then choose Create pipeline.


This will trigger AWS CodePipeline to execute the Source and Beta steps. The source artifact will be deployed to the AWS CodeDeploy deployment groups.


Now you can access the same DNS address of the AWS CodeDeploy instance to see the updated deployment. You will see the background color has changed to green and the page text has been updated.


We have now successfully created a delivery pipeline with two stages and integrated the deployment with AWS CodeDeploy. Now let’s integrate the Build stage with TeamCity.

Step 2: Create a custom action for TeamCity Build stage

AWS CodePipeline includes a number of actions that help you configure build, test, and deployment resources for your automated release process. TeamCity is not included in the default actions, so we will create a custom action and then include it in our delivery pipeline.

TeamCity’s custom action type (Build/Test categories) can be integrated with AWS CodePipeline. It’s similar to Jenkins and Solano CI custom actions. TeamCity’s CodePipeline plugin will also create a job worker that will poll AWS CodePipeline for job requests for this custom action, execute the job, and return the status result to AWS CodePipeline.
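
Conceptually, such a job worker is a small polling loop against the CodePipeline API. Here’s a stripped-down sketch in Python with boto3 of what the plugin does on our behalf (run_teamcity_build is a hypothetical placeholder; the real plugin is written in Java and also handles artifacts, timeouts, and error reporting):

import boto3

codepipeline = boto3.client("codepipeline")
action_type = {"category": "Build", "owner": "Custom",
               "provider": "TeamCity", "version": "1"}

while True:
    jobs = codepipeline.poll_for_jobs(actionTypeId=action_type,
                                      maxBatchSize=1).get("jobs", [])
    for job in jobs:
        codepipeline.acknowledge_job(jobId=job["id"], nonce=job["nonce"])
        try:
            run_teamcity_build(job)  # hypothetical: trigger the build and wait for it
            codepipeline.put_job_success_result(jobId=job["id"])
        except Exception as err:
            codepipeline.put_job_failure_result(
                jobId=job["id"],
                failureDetails={"type": "JobFailed", "message": str(err)},
            )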

The TeamCity AWS CodePipeline plugin is already installed on the TeamCity server we set up earlier. (To learn more, see the TeamCity documentation on installing plugins.) We will now create a custom action to integrate TeamCity with AWS CodePipeline using a custom-action JSON file.

Download this file locally: https://github.com/JetBrains/teamcity-aws-codepipeline-plugin/blob/master/custom-action.json

Open a terminal session (Linux, OS X, Unix) or command prompt (Windows) on a computer where you have installed the AWS CLI. For information about setting up the AWS CLI, see here.

Use the AWS CLI to run the aws codepipeline create-custom-action-type command, specifying the name of the JSON file you just downloaded (saved here as teamcity-custom-action.json).

For example, to create a build custom action:

$ aws codepipeline create-custom-action-type --cli-input-json file://teamcity-custom-action.json

This should result in an output similar to this:

{
    "actionType": {
        "inputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "actionConfigurationProperties": [
            {
                "description": "The expected URL format is http[s]://host[:port]",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "TeamCityServerURL"
            },
            {
                "description": "Corresponding TeamCity build configuration external ID",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "BuildConfigurationID"
            },
            {
                "description": "Must be unique, match the corresponding field in the TeamCity build trigger settings, satisfy regular expression pattern: [a-zA-Z0-9_-]+] and have length <= 20",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": true,
                "name": "ActionID"
            }
        ],
        "outputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "id": {
            "category": "Build",
            "owner": "Custom",
            "version": "1",
            "provider": "TeamCity"
        },
        "settings": {
            "entityUrlTemplate": "{Config:TeamCityServerURL}/viewType.html?buildTypeId={Config:BuildConfigurationID}",
            "executionUrlTemplate": "{Config:TeamCityServerURL}/viewLog.html?buildId={ExternalExecutionId}&tab=buildResultsDiv"
        }
    }
}

Before you add the custom action to your delivery pipeline, make the following changes to the TeamCity build server. You can access the server by opening the Public IP of the TeamCityServer instance from the EC2 Instance page.


In TeamCity, choose Projects. Under Build Configuration Settings, choose Version Control Settings. You need to remove the version control trigger here so that the TeamCity build server will be triggered during the Source stage in AWS CodePipeline. Choose Detach.


Step 3: Create a new AWS CodePipeline action trigger in TeamCity

Now add a new AWS CodePipeline trigger in your build configuration. Choose Triggers, and then choose Add new trigger.


From the drop-down menu, choose AWS CodePipeline Action.


 

In the AWS CodePipeline console, choose the region in which you created your delivery pipeline. Enter your access key credentials, and for Action ID, type a unique name. You will need this ID when you add a TeamCity Build stage to the pipeline.


Step 4: Create a new Build stage in the delivery pipeline for TeamCity

Add a stage to the pipeline and name it Build.


From the drop-down menu, choose Build. In Action name, type a name for the action. In Build provider, choose TeamCity, and then choose Add action.



For TeamCity Action Configuration, use the following:

TeamCityServerURL:  http://<Public DNS address of the TeamCity build server>[:port]


BuildConfigurationID: In your TeamCity project, choose Build. You’ll find this ID (AwsDemoPhpSimpleApp_Build) under Build Configuration Settings.


ActionID: In your TeamCity project, choose Build. You’ll find this ID under Build Configuration Settings. Choose Triggers, and then choose AWS CodePipeline Action.


Next, choose input and output artifacts for the Build stage, and then choose Add action.

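If you manage pipelines with the AWS CLI rather than the console, the stage you are assembling here would look roughly like the following in the JSON returned by aws codepipeline get-pipeline. This is a sketch for orientation only: the actionTypeId values match the custom action created earlier and the artifact names are the ones used later in this walkthrough, while the server URL (shown with TeamCity's default port 8111) and the ActionID are placeholders.

{
    "name": "Build",
    "actions": [
        {
            "name": "TeamCityBuild",
            "actionTypeId": {
                "category": "Build",
                "owner": "Custom",
                "provider": "TeamCity",
                "version": "1"
            },
            "configuration": {
                "TeamCityServerURL": "http://<public-dns-of-teamcity-server>:8111",
                "BuildConfigurationID": "AwsDemoPhpSimpleApp_Build",
                "ActionID": "<your-unique-action-id>"
            },
            "inputArtifacts": [{ "name": "MyApp" }],
            "outputArtifacts": [{ "name": "MyAppBuild" }],
            "runOrder": 1
        }
    ]
}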

We will now publish a new artifact to the Amazon S3 artifact bucket we created earlier, so we can see the deployment of a new app and its progress through the delivery pipeline. The demo app is packaged as PhpArtifact.zip for Linux and WindowsArtifact.zip for Windows; the download URLs appear in the commands below.

Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.

OS X/Linux:

$ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/PhpArtifact.zip | aws s3 cp - s3://demo-app-account-number/PhpArtifact.zip

Windows:

$ wget -q https://s3.amazonaws.com/teamcity-demo-app/WindowsArtifact.zip
$ aws s3 cp ./WindowsArtifact.zip s3://demo-app-account-number

From the AWS CodePipeline dashboard, under delivery-pipeline, choose Edit.

Edit Source stage by choosing the edit icon on the right.

Amazon S3 location:

Linux: s3://demo-app-account-number/PhpArtifact.zip

Windows: s3://demo-app-account-number/WindowsArtifact.zip

Under Output artifacts, make sure MyApp is displayed for Output artifact #1. This will be the input artifact for the Build stage.

The output artifact of the Build stage should be the input artifact of the Beta deployment stage (in this case, MyAppBuild).

Choose Update, and then choose Save pipeline changes. On the next page, choose Save and continue.

Step 5: Publish the build artifact for deployment

Step (a):

In TeamCity, on the Build Steps page, for Runner type, choose Command Line, and then add the following custom script to copy the source artifact to the TeamCity build checkout directory.

Note: This step is required only if your AWS CodePipeline source provider is either AWS CodeCommit or Amazon S3. If your source provider is GitHub, this step is redundant, because the artifact is copied over automatically by the TeamCity AWS CodePipeline plugin.

In Step name, enter a name for the Command Line runner to easily distinguish the context of the step.

Syntax:

$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %teamcity.build.checkoutDir%
$ unzip *.zip -d %teamcity.build.checkoutDir%
$ rm -rf %teamcity.build.checkoutDir%/*.zip

For Custom script, use the following commands:

cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %teamcity.build.checkoutDir%
unzip *.zip -d %teamcity.build.checkoutDir%
rm -rf %teamcity.build.checkoutDir%/*.zip


Step (b):

For Runner type, choose Command Line, and then add the following custom script to copy the build artifact to the output folder.

For Step name, enter a name for the Command Line runner.

Syntax:

$ mkdir -p %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/
$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/

For Custom script, use the following commands:

mkdir -p %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/
cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/

In Build Steps, choose Reorder build steps to ensure that the step that copies the source artifact runs before the PHP – PHP Unit step.

Drag and drop Copy Source Artifact To Build Checkout Directory to make it the first build step, and then choose Apply.

Navigate to the AWS CodePipeline console. Choose the delivery pipeline, and then choose Release change. When prompted, choose Release.
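If you prefer the CLI, the same release can be triggered with a single command (assuming the pipeline is named delivery-pipeline, as in this walkthrough):

$ aws codepipeline start-pipeline-execution --name delivery-pipeline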



The most recent change will run through the pipeline again. It might take a few moments before the status of the run is displayed in the pipeline view.

Here is what you’d see after AWS CodePipeline runs through all of the stages in the pipeline.

Let’s access one of the instances, listed on the EC2 Instances page, to see the new application deployment.

If your base operating system is Windows, accessing the public DNS address of one of the AWS CodeDeploy instances will result in the following page.

Windows: http://public-dns/

If your base operating system is Linux, accessing the public DNS address of one of the AWS CodeDeploy instances will display the following test page, which is the sample application.

Linux: http://public-dns/www/index.php

Congratulations! You’ve created an end-to-end deployment and delivery pipeline, from source code to build to deployment, in a fully automated way.

Summary:

In this post, you learned how to build an end-to-end, fully automated delivery and deployment pipeline for your application, at scale, using AWS deployment and management services. You also learned how AWS CodePipeline can be extended through custom actions to integrate other services, such as TeamCity.

If you have questions or suggestions, please leave a comment below.

Succeeding MegaZeux

Post Syndicated from Eevee original https://eev.ee/blog/2016/10/06/succeeding-megazeux/

In the beginning, there was ZZT. ZZT was a set of little shareware games for DOS that used VGA text mode for all the graphics, leading to such whimsical Rogue-like choices as ä for ammo pickups, Ω for lions, and ♀ for keys. It also came with an editor, including a small programming language for creating totally custom objects, which gave it the status of “game creation system” and a legacy that survives even today.

A little later on, there was MegaZeux. MegaZeux was something of a spiritual successor to ZZT, created by (as I understand it) someone well-known for her creative abuse of ZZT’s limitations. It added quite a few bells and whistles, most significantly a built-in font editor, which let aspiring developers draw simple sprites rather than rely on whatever they could scrounge from the DOS font.

And then…

And then, nothing. MegaZeux was updated for quite a while, and (unlike ZZT) has even been ported to SDL so it can actually run on modern operating systems. But there was never a third entry in this series, another engine worthy of calling these its predecessors.

I think that’s a shame.

The legacy

Plenty of people have never heard of ZZT, and far more have never heard of MegaZeux, so here’s a brief primer.

Both were released as “first-episode” shareware: they came with one game free, and you could pony up some cash to get the sequels. Those first games — Town of ZZT and Caverns of Zeux — have these moderately iconic opening scenes.

Town of ZZT
Caverns of Zeux

In the intervening decades, all of the sequels have been released online for free. If you want to try them yourself, ZZT 3.2 includes Town of ZZT and its sequels (but must be run in DOSBox), and you can get MegaZeux 2.84c, Caverns of Zeux, and the rest of the Zeux series separately.

Town of ZZT has you, the anonymous player, wandering around a loosely-themed “town” in search of five purple keys. It’s very much a game of its time: the setting is very vague but manages to stay distinct and memorable with very light touches; the puzzles range from trivial to downright cruel; the interface itself fights against you, as you can’t carry more than one purple key at a time; and the game can be softlocked in numerous ways, only some of which have advance warning in the form of “SAVE!!!” carved directly into the environment.

The armory, and a gruff guardian
Darkness, which all players love
A few subtle hints

Caverns of Zeux is a little more cohesive, with a (thin) plot that unfolds as you progress through the game. Your objectives are slightly vaguer; you start out only knowing you’re trapped in a cave, and further information must be gleaned from NPCs. The gameplay is shaken up a couple times throughout — you discover spellbooks that give you new abilities, but later lose your primary weapon. The meat of the game is more about exploring and less about wacky Sokoban puzzles, though with many of the areas looking very similar and at least eight different-colored doors scattered throughout the game, the backtracking can get frustrating.

A charming little town
A chasm with several gem holders
The ice caves, or maybe caverns

Those are obviously a bit retro-looking now, but they’re not bad for VGA text made by individual hobbyists in 1991 and 1994. ZZT only even uses CGA’s eight bright colors. MegaZeux takes a bit more advantage of VGA capabilities to let you edit the palette as well as the font, but games are still restricted to only using 16 colors at one time.

The font ZZT was stuck with
MegaZeux's default character set

That’s great, but who cares?

A fair question!

ZZT and MegaZeux both occupy a unique game development niche. It’s the same niche as (Z)Doom, I think, and a niche that very few other tools fill.

I’ve mumbled about this on Twitter a couple times, and several people have suggested that the PICO-8 or Mario Maker might be in the same vein. I disagree wholeheartedly! ZZT, MegaZeux, and ZDoom all have two critical — and rare — things in common.

  1. You can crack open the editor, draw a box, and have a game. On the PICO-8, you are a lonely god in an empty void; you must invent physics from scratch before you can do anything else. ZZT, MegaZeux, and Doom all have enough built-in gameplay to make a variety of interesting levels right out of the gate. You can treat them as nothing more than level editors, and you’ll be hitting the ground running — no code required. And unlike most “no programming” GCSes, I mean that literally!

  2. If and when you get tired of only using the built-in objects, you can extend the engine. ZZT and MegaZeux have programmable actors built right in. Even vanilla Doom was popular enough to gain a third-party tool, DEHACKED, which could edit the compiled doom.exe to customize actor behavior. Mario Maker might be a nice and accessible environment for making games, but at the end of the day, the only thing you can make with it is Mario.

Both of these properties together make for a very smooth learning curve. You can open the editor and immediately make something, rather than needing to absorb a massive pile of upfront stuff before you can even get a sprite on the screen. Once you need to make small tweaks, you can dip your toes into robots — a custom pickup that gives you two keys at once is four lines of fairly self-explanatory code. Want an NPC with a dialogue tree? That’s a little more complex, but not much. And then suddenly you discover you’re doing programming. At the same time, you get rendering, movement, combat, collision, health, death, pickups, map transitions, menus, dialogs, saving/loading… all for free.

MegaZeux has one more nice property, the art learning curve. The built-in font is perfectly usable, but a world built from monochrome 8×14 tiles is a very comfortable place to dabble in sprite editing. You can add eyebrows to the built-in player character or slightly reshape keys to fit your own tastes, and the result will still fit the “art style” of the built-in assets. Want to try making your own sprites from scratch? Go ahead! It’s much easier to make something that looks nice when you don’t have to worry about color or line weight or proportions or any of that stuff.

It’s true that we’re in an “indie” “boom” right now, and more game-making tools are available than ever before. A determined game developer can already choose from among dozens (hundreds?) of editors and engines and frameworks and toolkits and whatnot. But the operative word there is “determined”. Not everyone has their heart set on this. The vast majority of people aren’t interested in devoting themselves to making games, so the most they’d want to do (at first) is dabble.

But programming is a strange and complex art, where dabbling can be surprisingly difficult. If you want to try out art or writing or music or cooking or dance or whatever, you can usually get started with some very simple tools and a one-word Google search. If you want to try out game development, it usually requires programming, which in turn requires a mountain of upfront context and tool choices and explanations and mysterious incantations and forty-minute YouTube videos of some guy droning on in monotone.

To me, the magic of MegaZeux is that anyone with five minutes to spare can sit down, plop some objects around, and have made a thing.

Deep dive

MegaZeux has a lot of hidden features. It also has a lot of glass walls. Is that a phrase? It should be a phrase. I mean that it’s easy to find yourself wanting to do something that seems common and obvious, yet find out quite abruptly that it’s structurally impossible.

I’m not leading towards a conclusion here, only thinking out loud. I want to explain what makes MegaZeux interesting, but also explain what makes MegaZeux limiting, but also speculate on what might improve on it. So, you know, something for everyone.

Big picture

MegaZeux is a top-down adventure-ish game engine. You can make platformers, if you fake your own gravity; you can make RPGs, if you want to build all the UI that implies.

MegaZeux games can only be played in, well, MegaZeux. Games that need instructions and multiple downloads to be played are fighting an uphill battle. It’s a simple engine that seems reasonable to deploy to the web, and I’ve heard of a couple attempts at either reimplementing the engine in JavaScript or throwing the whole shebang at emscripten, but none are yet viable.

People have somewhat higher expectations from both games and tools nowadays. But approachability is often at odds with flexibility. The more things you explicitly support, the more complicated and intimidating the interface — or the more hidden features you have to scour the manual to even find out about.

I’ve looked through the advertising screenshots of Game Maker and RPG Maker, and I’m amazed how many things are all over the place at any given time. It’s like trying to configure the old Mozilla Suite. Every new feature means a new checkbox somewhere, and eventually half of what new authors need to remember is the set of things they can safely ignore.

SLADE’s Doom map editor manages to be much simpler, but I’m not particularly happy with that, either — it’s not clever enough to save you from your mistakes (or necessarily detect them), and a lot of the jargon makes no sense unless you’ve already learned what it means somewhere else. Plus, making the most of ZDoom’s extra features tends to involve navigating ten different text files that all have different syntax and different rules.

MegaZeux has your world, some menus with objects in them, and spacebar to place something. The UI is still very DOS-era, but once you get past that, it’s pretty easy to build something.

How do you preserve that in something “modern”? I’m not sure. The only remotely-similar thing I can think of is Mario Maker, which cleverly hides a lot of customization options right in the world editor UI: placing wings on existing objects, dropping objects into blocks, feeding mushrooms to enemies to make them bigger. The downside is that Mario Maker has quite a lot of apocryphal knowledge that isn’t written down anywhere. (That’s not entirely a downside… but I could write a whole other post just exploring that one sentence.)

Graphics

Oh, no.

Graphics don’t make the game, but they’re a significant limiting factor for MegaZeux. Fixing everything to a grid means that even a projectile can only move one tile at a time. Only one character can be drawn per grid space, so objects can’t usefully be drawn on top of each other. Animations are difficult, since they eat into your 255-character budget, which limits real-time visual feedback. Most individual objects are a single tile — creating anything larger requires either a lot of manual work to keep all the parts together, or the use of multi-tile sprites which don’t quite exist on the board.

And yet! The same factors are what make MegaZeux very accessible. The tiles are small and simple enough that different art styles don’t really clash. Using a grid means simple games don’t have to think about collision detection at all. A monochromatic font can be palette-shifted, giving you colorful variants of the same objects for free.

How could you scale up the graphics but preserve the charm and approachability? Hmm.

I think the palette restrictions might be important here, but merely bumping from 2 to 8 colors isn’t quite right. The palette-shifting in MegaZeux always makes me think of keys first, and multi-colored keys make me think of Chip’s Challenge, where the key sprites were simple but lightly shaded.

All four Chips Challenge 2 keys

The game has to contain all four sprites separately. If you wanted to have a single sprite and get all of those keys by drawing it in different colors, you’d have to specify three colors per key: the base color, a lighter color, and a darker color. In other words, a ramp — a short gradient, chosen from a palette, that can represent the same color under different lighting. Here are some PICO-8 ramps, for example. What about a sprite system that drew sprites in terms of ramps rather than individual colors?

A pixel-art door in eight different color schemes

I whipped up this crappy example to illustrate. All of the doors are fundamentally the same image, and all of them use only eight colors: black, transparent, and two ramps of three colors each. The top-left door could be expressed as just “light gray” and “blue” — those colors would be expanded into ramps automatically, and black would remain black.
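To make the ramp idea concrete, here is a minimal sketch in Lua, the same hypothetical style as the robot code later in this post. The palette values, field names, and resolve function are all invented for illustration:

-- Hypothetical: each named base color expands to a three-color ramp.
local RAMPS = {
    blue = { dark = "#1a2a6e", base = "#3a56c4", light = "#8ea6ff" },
    gray = { dark = "#4a4a4a", base = "#9a9a9a", light = "#e0e0e0" },
}

-- A sprite pixel stores a ramp slot (1 or 2) and a shade, not a color;
-- recoloring the whole sprite just means swapping the ramp_names table.
local function resolve(pixel, ramp_names)
    if pixel.slot == nil then
        return "black"  -- fixed pixels (outlines) are never remapped
    end
    return RAMPS[ramp_names[pixel.slot]][pixel.shade]
end

-- The same door as "gray + blue"...
print(resolve({ slot = 2, shade = "light" }, { "gray", "blue" }))  --> #8ea6ff
-- ...and as "gray + gray", with no change to the pixel data.
print(resolve({ slot = 2, shade = "light" }, { "gray", "gray" }))  --> #e0e0e0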

I don’t know how well this would work, but I’d love to see someone try it. It may not even be necessary to require all sprites be expressed this way — maybe you could import your own truecolor art if you wanted. ZDoom works kind of this way, though it’s more of a historical accident: it does support arbitrary PNGs, but vanilla Doom sprites use a custom format that’s in terms of a single global palette, and only that custom format can be subjected to palette manipulation.


Now, MegaZeux has the problem that small sprites make it difficult to draw bigger things like UI (or a non-microscopic player). The above sprites are 32×32 (scaled up 2× for ease of viewing here), which creates the opposite problem: you can’t possibly draw text or other smaller details with them.

I wonder what could be done here. I know that the original Pokémon games have a concept of “metatiles”: every map is defined in terms of 4×4 blocks of smaller tiles. You can see it pretty clearly on this map of Pallet Town. Each larger square is a metatile, and many of them repeat, even in areas that otherwise seem different.

Pallet Town from Pokémon Red, carved into blocks

I left the NPCs in because they highlight one of the things I found most surprising about this scheme. All the objects you interact with — NPCs, signs, doors, items, cuttable trees, even the player yourself — are 16×16 sprites. The map appears to be made out of 16×16 sprites, as well — but it’s really built from 8×8 tiles arranged into bigger 32×32 tiles.

This isn’t a particularly nice thing to expose directly to authors nowadays, but it demonstrates that there are other ways to compose tiles besides the obvious. Perhaps simple terrain like grass and dirt could be single large tiles, but you could also make a large tile by packing together several smaller tiles?
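As a sketch of the data structure this implies (again in hypothetical Lua), a metatile is just a small grid of tile indices, and the map stores one metatile index per cell; every number here is invented:

-- Hypothetical metatile table: each entry is a 4×4 grid of 8×8-tile indices.
local metatiles = {
    [1] = {  -- plain grass
        { 3, 3, 3, 3 },
        { 3, 3, 3, 3 },
        { 3, 3, 3, 3 },
        { 3, 3, 3, 3 },
    },
    [2] = {  -- roof edge: roof tiles (7, 8) over wall tiles (12)
        { 7, 7, 8, 8 },
        { 7, 7, 8, 8 },
        { 12, 12, 12, 12 },
        { 12, 12, 12, 12 },
    },
}

-- A map slice is then one index per 32×32 cell: four numbers instead of 64.
local map = {
    { 1, 2 },
    { 1, 1 },
}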

Text? Oh, text can just be a font.

Player status

MegaZeux has no HUD. To know how much health you have, you need to press Enter to bring up the pause menu, where your health is listed in a stack of other numbers like “gems” and “coins”. I say “menu”, but the pause menu is really a list of keyboard shortcuts, not something you can scroll through and choose items from.

MegaZeux's in-game menu, showing a list of keyboard shortcuts on the left and some stats on the right

To be fair, ZZT does reserve the right side of the screen for your stats, and it puts health at the top. I find myself scanning the MegaZeux pause menu for health every time, which seems a somewhat poor choice for the number that makes the game end when you run out of it.

Unlike most adventure games, your health is an integer starting at 100, not a small number of hearts or whatever. The only feedback when you take damage is a sound effect and an “Ouch!” at the bottom of the screen; you don’t flinch, recoil, or blink. Health pickups might give you any amount of health, you can pick up health beyond 100, and nothing on the screen tells you how much you got when you pick one up. Keeping track of your health in your head is, ah, difficult.

MegaZeux also has a system of multiple lives, but those are also just a number, and the default behavior on “death” is for your health to reset to 100 and absolutely nothing else happens. Walking into lava (which hurts for 100 at a time) will thus kill you and strip you of all your lives quite rapidly.

It is possible to manually create a HUD in MegaZeux using the “overlay” layer, a layer that gets drawn on top of everything else in the world. The downside is that you then can’t use the overlay for anything in-world, like roofs or buildings that can be walked behind. The overlay can be in multiple modes, one that’s attached to the viewport (like a HUD) and one that’s attached to the world (like a ceiling layer), so an obvious first step would be offering these as separate features.

An alternative is to use sprites, blocks of tiles created and drawn as a single unit by Robotic code. Sprites can be attached to the viewport and can even be drawn above the overlay, though they aren’t exposed in the editor and must be created entirely manually. Promising, if clumsy and a bit non-obvious — I only just now found out about this possibility by glancing at an obscure section of the manual.

Another looming problem is that text is the same size as everything else — but you generally want a HUD to be prominent enough to glance at very quickly.

This makes me wonder how more advanced drawing could work in general. Instead of writing code by hand to populate and redraw your UI, could you just drag and drop some obvious components (like “value of this number”) onto a layer? Reuse the same concept for custom dialogs and menus, perhaps?

Inventory

MegaZeux has no inventory. Or, okay, it has sort of an inventory, but it’s all over the place.

The stuff in the pause menu is kind of like an inventory. It counts ammo, gems, coins, two kinds of bombs, and a variety of keys for you. The game also has multiple built-in objects that can give you specific numbers of gems and coins, which is neat, except that gems and coins don’t actually do anything. I think they increase your score, but until now I’d forgotten that MegaZeux has a score.

A developer can also define six named “counters” (i.e., integers) that will show up on the pause menu when nonzero. Caverns of Zeux uses this to show you how many rainbow gems you’ve discovered… but it’s just a number labeled RainbowGems, and there’s no way to see which ones you have.

Other than that, you’re on your own. All of the original Zeux games made use of an inventory, so this is a really weird oversight. Caverns of Zeux also had spellbooks, but you could only see which ones you’d found by trying to use them and seeing if it failed. Chronos Stasis has maybe a dozen items you can collect and no way to see which ones you have — though, to be fair, you use most of them in the same place. Forest of Ruin has a fairly standard inventory, but no way to view it. All three games have at least one usable item that they just bind to a key, which you’d better remember, because it’s game-specific and thus not listed in the general help file.

To be fair, this is preposterously flexible in a way that a general inventory might not be. But it’s also tedious for game authors and potentially confusing for players.

I don’t think an inventory would be particularly difficult to support, and MegaZeux is already halfway there. Most likely, the support is missing because it would need to be based on some concept of a custom object, and MegaZeux doesn’t have that either. I’ll get to that in a bit.

Creating new objects

MegaZeux allows you to create “robots”, objects that are controlled entirely through code you write in a simple programming language. You can copy and paste robots around as easily as any other object on the map. Cool.

What’s less cool is that robots can’t share code — when you place one, you make a separate copy of all of its code. If you create a small horde of custom monsters, then later want to make a change, you’ll have to copy/paste all the existing ones. Hope you don’t have them on other boards!

Some workarounds exist: you could make use of robots’ ability to copy themselves at runtime, and it’s possible to save or load code to/from an external file at runtime. More cumbersome than defining a template object and dropping it wherever you want, and definitely much less accessible.

This is really, really bad, because the only way to extend any of the builtin objects is to replace them with robots!

I’m a little spoiled by ZDoom, where you can create as many kinds of actor as you want. Actors can even inherit from one another, though the mechanism is a little limited and… idiosyncratic, so I wouldn’t call it beginner-friendly. It’s pretty nice to be able to define a type of monster or decoration and drop it all over a map, and I’m surprised such a thing doesn’t exist in MegaZeux, where boards and the viewport both tend to be fairly large.

This is the core of how ZDoom’s inventory works, too. I believe that inventories contain only kinds, not individual actors — that is, you can have 5 red keys, but the game only knows “5 of RedCard” rather than having five distinct RedCard objects. I’m sure part of the reason MegaZeux has no general-purpose inventory is that every custom object is completely distinct, with nothing fundamentally linking even identical copies of the same robot together.

Combat

By default, the player can shoot bullets by holding Space and pressing a direction. (Moving and shooting at the same time is… difficult.) Like everything else, bullets are fixed to the character grid, so they move an entire tile at a time.

Bullets can also destroy other projectiles, sometimes. A bullet hitting another bullet will annihilate both. A bullet hitting a fireball might either turn the fireball into a regular fire tile or simply be destroyed, depending on which animation frame the fireball is in when the bullet hits it. I didn’t know this until someone told me only a couple weeks ago; I’d always just thought it was random and arbitrary and frustrating. Seekers can’t be destroyed at all.

Most enemies charge directly at you; most are killed in one hit; most attack you by colliding with you; most are also destroyed by the collision.

The (built-in) combat is fairly primitive. It gives you something to do, but it’s not particularly satisfying, which is unfortunate for an adventure game engine.

Several factors conspire here. Graphical limitations make it difficult to give much visual feedback when something (including the player) takes damage or is destroyed. The motion of small, fast-moving objects on a fixed grid can be hard to keep track of. No inventory means weapons aren’t objects, either, so custom weapons need to be implemented separately in the global robot. No custom objects means new enemies and projectiles are difficult to create. No visual feedback means hitscan weapons are implausible.

I imagine some new and interesting directions would make themselves obvious in an engine with a higher resolution and custom objects.

Robotic

Robotic is MegaZeux’s programming language for defining the behavior of robots, and it’s one of the most interesting parts of the engine. A robot that acts like an item giving you two keys might look like this:

end
: "touch"
* "You found two keys!"
givekey c04
givekey c05
die as an item
MegaZeux's Robotic editor

Robotic has no blocks, loops, locals, or functions — though recent versions can fake functions by using special jumps. All you get is a fixed list of a few hundred commands. It’s effectively a form of bytecode assembly, with no manual assembling required.

And yet! For simple tasks, it works surprisingly well. Creating a state machine, as in the code above, is straightforward. end stops execution, since all robots start executing from their first line on start. : "touch" is a label (:"touch" is invalid syntax) — all external stimuli are received as jumps, and touch is a special label that a robot jumps to when the player pushes against it. * displays a message in the colorful status line at the bottom of the screen. givekey gives a key of a specific color — colors are a first-class argument type, complete with their own UI in the editor and an automatic preview of the particular colors. die as an item destroys the robot and simultaneously moves the player on top of it, as though the player had picked it up.

A couple other interesting quirks:

  • Most prepositions, articles, and other English glue words are semi-optional and shown in grey. The line die as an item above has as an greyed out, indicating that you could just type die item and MegaZeux would fill in the rest. You could also type die as item, die an item, or even die through item, because all of as, an, and through act like whitespace. Most commands sprinkle a few of these in to make themselves read a little more like English and clarify the order of arguments.

  • The same label may appear more than once. However, labels may be zapped, and a jump will always go to the first non-zapped occurrence of a label. This lets an author encode a robot’s state within the state of its own labels, obviating the need for state-tracking variables in many cases. (Zapping labels predates per-robot variables — “local counters” — which are unhelpfully named local through local32.)

    Of course, this can rapidly spiral out of control when state changes are more complicated, several labels start out zapped, or different labels are zapped out of step with each other. Robotic offers no way to query how many of a label have been zapped, and MegaZeux has no debugger for label states, so it’s not hard to lose track of what’s going on. Still, it’s an interesting extension atop a simple label-based state machine (see the sketch after this list).

  • The built-in types often have some very handy shortcuts. For example, GO [dir] # tells a robot to move in some direction, some number of spaces. The directions you’d expect all work: NORTH, SOUTH, EAST, WEST, and synonyms like N and UP. But there are some extras like RANDNB to choose a random direction that doesn’t block the robot, or SEEK to move towards the player, or FLOW to continue moving in its current direction. Some of the extras only make sense in particular contexts, which complicates them a little, but the ability to tell an NPC to wander aimlessly with only RANDNB is incredible.

  • Robotic is more powerful than you might expect; it can change anything you can change in the editor, emulate the behavior of most other builtins, and make use of several features not exposed in the editor at all.
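Putting two of those quirks together (zapped labels and the direction shortcuts), here is an untested sketch of a guard robot that wanders with RANDNB until touched, then changes its response to a second touch by zapping its own label. Every command in it is either shown in the two-key example above or described in this list:

: "wander"
go RANDNB 1
goto "wander"
: "touch"
* "Halt! None shall pass."
zap "touch" 1
goto "wander"
: "touch"
* "Oh, fine. Go ahead."
die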

Nowadays, the obvious choice for an embedded language is Lua. It’d be much more flexible, to be sure, but it’d lose a little of the charm. One of the advantages of creating a totally custom language for a game is that you can add syntax for very common engine-specific features, like colors; in a general-purpose language, those are a little clumsier.

function myrobot:ontouch(toucher)
    if not toucher.is_player then
        return false
    end
    world:showstatus("You found two keys!")
    toucher.inventory:add(Key{color=world.colors.RED})
    toucher.inventory:add(Key{color=world.colors.PURPLE})
    self:die()
    return true
end

Changing the rules

MegaZeux has a couple kinds of built-in objects that are difficult to replicate — and thus difficult to customize.

One is projectiles, mentioned earlier. Several variants exist, and a handful of specific behaviors can be toggled with board or world settings, but otherwise that’s all you get. It should be feasible to replicate them all with robots, but I suspect it’d involve a lot of subtleties.

Another is terrain. MegaZeux has a concept of a floor layer (though this is not explicitly exposed in the editor) and some floor tiles have different behavior. Ice is slippery; forest blocks almost everything but can be trampled by the player; lava hurts the player a lot; fire hurts the player and can spread, but burns out after a while. The trick with replicating these is that robots cannot be walked on. An alternative is to use sensors, which can be walked on and which can be controlled by a robot, but anything other than the player will push a sensor rather than stepping onto it. The only other approach I can think of is to keep track of all tiles that have a custom terrain, draw or animate them manually with custom floor tiles, and constantly check whether something’s standing there.

Last are powerups, which are really effects that rings or potions can give you. Some of them are special cases of effects that Robotic can do more generally, such as giving 10 health or changing all of one object into another. Some are completely custom engine stuff, like “Slow Time”, which makes everything on the board (even robots!) run at half speed. The latter are the ones you can’t easily emulate. What if you want to run everything at a quarter speed, for whatever reason? Well, you can’t, short of replacing everything with robots and doing a multiplication every time they wait.

ZDoom has a similar problem: it offers fixed sets of behaviors and powerups (which mostly derive from the commercial games it supports) and that’s it. You can manually script other stuff and go quite far, but some surprisingly simple ideas are very difficult to implement, just because the engine doesn’t offer the right kind of hook.

The tricky part of a generic engine is that a game creator will eventually want to change the rules, and they can only do that if the engine has rules for changing those rules. If the engine devs never thought of it, you’re out of luck.

Someone else please carry on this legacy

MegaZeux still sees development activity, but it’s very sporadic — the last release was in 2012. New features tend to be about making the impossible possible, rather than making the difficult easier. I think it’s safe to call MegaZeux finished, in the sense that a novel is finished.

I would really like to see something pick up its torch. It’s a very tricky problem, especially with the sprawling complexity of games, but surely it’s worth giving non-developers a way to try out the field.

I suppose if ZZT and MegaZeux and ZDoom have taught us anything, it’s that the best way to get started is to just write a game and give it very flexible editing tools. Maybe we should do that more. Maybe I’ll try to do it with Isaac’s Descent HD, and we’ll see how it turns out.

PiBakery – foolproof custom Raspbian setup

Post Syndicated from Lucy Hattersley original https://www.raspberrypi.org/blog/pibakery/

Everybody loves cake, right? Cakes have layers. Mmm…. cake! We’re sure you’ll also love PiBakery, a brand new way to bake Raspberry Pi images, which makes creating a custom image a… piece of cake.


PiBakery was created by David Ferguson. He’s a talented 17-year-old whom we first met at the Big Birthday event we held to celebrate four years of Pi back in February. He showed Liz and Eben a work-in-progress version of PiBakery, and they’ve been raving about it ever since.

This crafty program enables users to mix together a customised version of Raspbian with additional ingredients, and you need absolutely no experience with computers to set up your custom image.

In PiBakery, you drag and drop blocks (just like Scratch) to add extra components. PiBakery then mixes the latest version of Raspbian with its additional sprinkles, and flashes the result directly to an SD card.


“The idea for PiBakery came about when I went to a Raspberry Pi event,” says David. “I needed to connect my Pi to the network there, but didn’t have a monitor, keyboard, and mouse. I needed a way of adding a network to my Raspberry Pi that didn’t require booting it up and manually connecting.”

“PiBakery solves this issue,” he explains. “You can simply drag across the blocks that you want to use with your Raspberry Pi, and the SD card will be created for you.”

“If you’ve already made an SD card using PiBakery, you can insert that card back into your computer, and keep editing the blocks to add additional software, configure new wireless networks, and alter different settings,” says David. “All without having to find a monitor, keyboard, and mouse.”

PiBakery is available for Mac and Windows, with a Linux version on the way. It can be downloaded directly from its website. As well as the scripts and block interface, it contains the whole Raspbian installation, so the initial download takes quite a while. However, it makes the process of building and flashing SD cards remarkably simple.


David has written a guide to creating customised SD cards with PiBakery. It’s a very easy program to use, and we followed his guide to quickly build a custom version of Raspbian that connected straight to our local wireless network. Guess what: it worked first time.

Behind the scenes, PiBakery creates a set of scripts that run when the Raspberry Pi is powered on (either just the first time, or every time it is powered on). These scripts can be used to set up and connect to a WiFi network, and to activate SSH.

Other options include installing Apache, changing the user password, and running Python or command line scripts.
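For a sense of what this looks like under the hood, here is an illustrative sketch (not PiBakery’s actual output) of the kind of first-boot script its WiFi and SSH blocks might generate on Raspbian:

#!/bin/bash
# Illustrative sketch only; PiBakery's real generated scripts may differ.

# "Connect to WiFi" block: append a network entry for wpa_supplicant
cat >> /etc/wpa_supplicant/wpa_supplicant.conf <<'EOF'
network={
    ssid="MyHomeNetwork"
    psk="example-passphrase"
}
EOF

# "Activate SSH" block: turn the SSH server on now and at every boot
systemctl enable ssh
systemctl start ssh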

The user controls which scripts are used with the block-based interface. You drag and drop the tasks you want the Raspberry Pi to perform when it’s powered up. Piece of cake.

We love PiBakery, and cake. Did we mention cake?

The post PiBakery – foolproof custom Raspbian setup appeared first on Raspberry Pi.