Tag Archives: mac os x

Your Hard Drive Crashed — Get Working Again Fast with Backblaze

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/how-to-recover-your-files-with-backblaze/

The worst thing that can happen to a computer user has happened: the hard drive on your computer crashed, or your computer is lost or completely unusable.

Fortunately, you’re a Backblaze customer with a current backup in the cloud. That’s great. The challenge is that you’ve got a presentation to make in just 48 hours and the document and materials you need for the presentation were on the hard drive that crashed.

Relax. Backblaze has your data (and your back). The question is, how do you get what you need to make that presentation deadline?

Here are some strategies you could use.

One — The first approach is to get back the presentation file and materials you need to meet your presentation deadline as quickly as possible. You can use another computer (maybe even your smartphone) to make that presentation.

Two — The second approach is to get your computer (or a new computer, if necessary) working again and restore all the files from your Backblaze backup.

Let’s start with Option One, which gets you back to work with just the files you need now as quickly as possible.

Option One — You’ve Got a Deadline and Just Need Your Files

Getting Back to Work Immediately

You want to get your computer working again as soon as possible, but perhaps your top priority is getting access to the files you need for your presentation. The computer can wait.

Find a Computer to Use

First of all, you’re going to need a computer to use. If you have another computer handy, you’re all set. If you don’t, here are some ideas on where to find one:

  • Family and Friends
  • Work
  • Neighbors
  • Local library
  • Local school
  • Community or religious organization
  • Local computer shop
  • Online store

If you have a smartphone that you can use to give your presentation or to print materials, that’s great. With the Backblaze app for iOS and Android, you can download files directly from your Backblaze account to your smartphone. You also have the option with your smartphone to email or share files from your Backblaze backup so you can use them elsewhere.

Download The File(s) You Need

Once you have the computer, you need to connect to your Backblaze backup through a web browser or the Backblaze smartphone app.

Backblaze Web Admin

Sign in to your Backblaze account. You can download the files directly or use the share link to share files with yourself or someone else.

If you need step-by-step instructions on retrieving your files, see the Restore the Files to the Drive section below. You also can find help at https://help.backblaze.com/hc/en-us/articles/217665888-How-to-Create-a-Restore-from-Your-Backblaze-Backup.

Smartphone App

If you have an iOS or Android smartphone, you can use the Backblaze app and retrieve the files you need. You then could view the file on your phone, use a smartphone app with the file, or email it to yourself or someone else.

Backblaze Smartphone app (iOS)

Using one of the approaches above, you got your files back in time for your presentation. Way to go!

Now, the next step is to get the computer with the bad drive running again and restore all your files, or, if that computer is no longer usable, restore your Backblaze backup to a new computer.

Option Two — You Need a Working Computer Again

Getting the Computer with the Failed Drive Running Again (or a New Computer)

If the computer with the failed drive can’t be saved, then you’re going to need a new computer. A new computer likely will come with the operating system installed and ready to boot. If you’ve got a running computer and are ready to restore your files from Backblaze, you can skip forward to Restore the Files to the Drive.

If you need to replace the hard drive in your computer before you restore your files, you can continue reading.

Buy a New Hard Drive to Replace the Failed Drive

The hard drive is gone, so you’re going to need a new drive. If you have a computer or electronics store nearby, you could get one there. Another choice is to order a drive online and pay for one- or two-day delivery. You have a few choices:

  1. Buy a hard drive of the same type and size you had
  2. Upgrade to a drive with more capacity
  3. Upgrade to an SSD. SSDs cost more but they are faster, more reliable, and less susceptible to jolts, magnetic fields, and other hazards that can affect a drive. Otherwise, they work the same as a hard disk drive (HDD) and most likely will work with the same connector.


Hard Disk Drive (HDD)

Solid State Drive (SSD)


Be sure that the drive dimensions are compatible with where you’re going to install the drive in your computer, and that the drive connector is compatible with your computer system (SATA, PCIe, etc.). Here’s some help.

Install the Drive

If you’re handy with computers, you can install the drive yourself. It’s not hard, and there are numerous videos on YouTube and elsewhere on how to do this. Just be sure to note how everything is connected so you can put it all back together correctly. Also, be sure to discharge any static electricity from your body by touching something metallic before you handle anything inside the computer. If all this sounds like too much to handle, find a friend or a local computer store to help you.
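
Once the new drive is installed, it’s worth confirming that the operating system sees it before you restore anything. Here’s a minimal sketch for macOS; the disk identifier (disk2) and volume name are assumptions, so check the output of the first command for your actual identifier. Windows users would do the equivalent in Disk Management.

# List attached drives and find the new one (macOS)
diskutil list

# Format the new drive as a journaled HFS+ volume named "Restored"
# (replace disk2 with the identifier reported by diskutil list)
sudo diskutil eraseDisk JHFS+ Restored /dev/disk2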

Note:  If the drive that failed is a boot drive for your operating system (either Macintosh or Windows), you need to make sure that the drive is bootable and has the operating system files on it. You may need to reinstall from an operating system source disk or install files.

Restore the Files to the Drive

To start, you will need to sign in to the Backblaze website with your registered email address and password. Visit https://secure.backblaze.com/user_signin.htm to log in.

Sign In to Your Backblaze Account

Selecting the Backup

Once logged in, you will be brought to the account Overview page. On this page, all of the computers registered for backup under your account are shown with some basic information about each. Select the backup from which you wish to restore data by using the appropriate “Restore” button.

Screenshot of Admin for Selecting the Backup

Selecting the Type of Restore

Backblaze offers three different ways in which you can receive your restore data: downloadable ZIP file, USB flash drive, or USB hard drive. The downloadable ZIP restore option will create a ZIP file of the files you request that is made available for download for 7 days. ZIP restores do not have any additional cost and are a great option for individual files or small sets of data.

Depending on the speed of your internet connection to the Backblaze data center, downloadable restores may not always be the best option for restoring very large amounts of data. ZIP restores are limited to 500 GB per request and a maximum of 5 active requests can be submitted under a single account at any given time.

USB flash and hard drive restores are built with the data you request and then shipped to an address of your choosing via FedEx Overnight or FedEx Priority International. USB flash restores cost $99 and can contain up to 128 GB (110,000 MB of data), and USB hard drive restores cost $189 and can contain up to 4 TB (3,500,000 MB of data). Both include the cost of shipping.

You can return the USB drive within 30 days for a full refund under our Restore Return Refund Program, effectively making even a shipped USB drive restore free.

Screenshot of Admin for Selecting the Type of Restore

Selecting Files for Restore

Using the left-hand file viewer, navigate to the location of the files you wish to restore. You can use the disclosure triangles to see subfolders. Clicking on a folder name will display the folder’s files in the right-hand file viewer. If you are attempting to restore files that have been deleted or are otherwise missing, or files from a failed or disconnected secondary or external hard drive, you may need to change the time frame parameters.

Put checkmarks next to disks, files or folders you’d like to recover. Once you have selected the files and folders you wish to restore, select the “Continue with Restore” button above or below the file viewer. Backblaze will then build the restore via the option you select (ZIP or USB drive). You’ll receive an automated email notifying you when the ZIP restore has been built and is ready for download or when the USB restore drive ships.

If you are using the downloadable ZIP option, and the restore is over 2 GB, we highly recommend using the Backblaze Downloader for better speed and reliability. We have a guide on using the Backblaze Downloader for Mac OS X or for Windows.

For additional assistance, visit our help files at https://help.backblaze.com/hc/en-us/articles/217665888-How-to-Create-a-Restore-from-Your-Backblaze-Backup

Screenshot of Admin for Selecting Files for Restore

Extracting the ZIP

Recent versions of both macOS and Windows have built-in capability to extract files from a ZIP archive. If the built-in capabilities aren’t working for you, you can find additional utilities for Macintosh and Windows.
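
If you prefer the command line, extracting the restore is a one-liner on macOS or Linux. This is a sketch; the ZIP file name and destination folder are assumptions.

# Extract a downloaded restore into a folder of your choice
unzip ~/Downloads/backblaze-restore.zip -d ~/Restored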

Reactivating your Backblaze Account

Now that you’ve got a working computer again, you’re going to need to reinstall Backblaze Backup (if it’s not on the system already) and connect with your existing account. Start by downloading and reinstalling Backblaze.

If you’ve restored the files from your Backblaze Backup to your new computer or drive, you don’t want to have to reupload the same files again to your Backblaze backup. To let Backblaze know that this computer is on the same account and has the same files, you need to use “Inherit Backup State.” See https://help.backblaze.com/hc/en-us/articles/217666358-Inherit-Backup-State

Screenshot of Admin for Inherit Backup State

That’s It

You should be all set, either with the files you needed for your presentation, or with a restored computer that is again ready to do productive work.

We hope your presentation wowed ’em.

If you have any additional questions on restoring from a Backblaze backup, please ask away in the comments. Also, be sure to check out our help resources at https://www.backblaze.com/help.html.

The post Your Hard Drive Crashed — Get Working Again Fast with Backblaze appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Sales Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-sales-engineer/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But, we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze deeply into their infrastructure, so it’s time to hire our first Sales Engineer!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/mo. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team-oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Backblaze B2 cloud storage is a building block for almost any computing service that requires storage. Customers need our help integrating B2 into everything from iOS apps to Docker containers. Some customers integrate directly with the API using the programming language of their choice; others want to solve a specific problem using ready-made software already integrated with B2.

At the same time, our computer backup product is deepening its integration into enterprise IT systems. We are commonly asked how to set Windows policies, integrate with Active Directory, and install the client via remote management tools.

We are looking for a sales engineer who can help our customers navigate the integration of Backblaze into their technical environments.

Are you 1/2” deep into many different technologies, and unafraid to dive deeper?

Can you confidently talk with customers about their technology, even if you have to look up all the acronyms right after the call?

Are you excited to set up complicated software in a lab and write knowledge base articles about your work?

Then Backblaze is the place for you!

Enough about Backblaze already, what’s in it for me?
In this role, you will be given the opportunity to learn about the technologies that drive innovation today: diverse technologies that customers are using day in and day out. And more importantly, you’ll learn how to learn new technologies.

Just as an example, in the past 12 months, we’ve had the opportunity to learn and become experts in these diverse technologies:

  • How to set up VM servers for lab environments, both on-prem and using cloud services.
  • How to create an automatically “resetting” demo environment for the sales team.
  • How to set up Microsoft Domain Controllers with Active Directory and AD Federation Services.
  • The basics of OAuth and web single sign-on (SSO).
  • Archiving video workflows from camera to media asset management systems.
  • How to upload/download files from JavaScript by enabling CORS.
  • How to install and monitor online backup installations using RMM tools, like JAMF.
  • Tape (LTO) systems. (Yes – people still use tape for storage!)

How can I know if I’ll succeed in this role?

You have:

  • Confidence. Be able to ask customers questions about their environments and convey to them your technical acumen.
  • Curiosity. Always want to learn about customers’ situations, how they got there and what problems they are trying to solve.
  • Organization. You’ll work with customers, integration partners, and Backblaze team members on projects of various lengths. You can context switch and either have a great memory or keep copious notes. Your checklists have their own checklists.

You are versed in:

  • The fundamentals of Windows, Linux and Mac OS X operating systems. You shouldn’t be afraid to use a command line.
  • Building, installing, integrating and configuring applications on any operating system.
  • Debugging failures – reading logs, monitoring usage, and searching Google effectively to fix problems should excite you.
  • The basics of TCP/IP networking and the HTTP protocol.
  • Novice development skills in any programming/scripting language, with a basic understanding of data structures and program flow.

Your background contains:

  • Bachelor’s degree in computer science or the equivalent.
  • 2+ years of experience as a pre- or post-sales engineer.

The right extra credit:

There are literally hundreds of previous experiences you could have had that would make you perfect for this job. Some experiences that we know would be helpful for us are below, but make sure you tell us your stories!

  • Experience using or programming against Amazon S3.
  • Experience with large on-prem storage – NAS, SAN, Object. And backing up data on such storage with tools like Veeam, Veritas and others.
  • Experience with photo or video media. Media archiving is a key market for Backblaze B2.
  • Programming Arduinos to automatically feed your dog.
  • Experience programming against web or REST APIs. (Point us towards your projects, if they are open source and available to link to.)
  • Experience with sales tools like Salesforce.
  • 3D printing door stops.
  • Experience with Windows Servers, Active Directory, Group policies and the like.

What’s it like working with the Sales team?

The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we.” We are truly a team.

We are honest with each other and with our customers, and we communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company, and we care deeply about their success.

If this all sounds like you:

1. Send an email to [email protected] with the position in the subject line.
2. Tell us a bit about your Sales Engineering experience.
3. Include your resume.

The post Wanted: Sales Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    How to Prepare for AWS’s Move to Its Own Certificate Authority

    Post Syndicated from Jonathan Kozolchyk original https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/

    AWS Certificate Manager image

     

    Update from March 28, 2018: We updated the Amazon Trust Services table by replacing an out-of-date value with a new value.


    Transport Layer Security (TLS, formerly called Secure Sockets Layer [SSL]) is essential for encrypting information that is exchanged on the internet. For example, Amazon.com uses TLS for all traffic on its website, and AWS uses it to secure calls to AWS services.

    An electronic document called a certificate verifies the identity of the server when creating such an encrypted connection. The certificate helps establish proof that your web browser is communicating securely with the website that you typed in your browser’s address field. Certificate Authorities, also known as CAs, issue certificates to specific domains. When a domain presents a certificate that is issued by a trusted CA, your browser or application knows it’s safe to make the connection.

    In January 2016, AWS launched AWS Certificate Manager (ACM), a service that lets you easily provision, manage, and deploy SSL/TLS certificates for use with AWS services. These certificates are available for no additional charge through Amazon’s own CA: Amazon Trust Services. For browsers and other applications to trust a certificate, the certificate’s issuer must be included in the browser’s trust store, which is a list of trusted CAs. If the issuing CA is not in the trust store, the browser will display an error message (see an example) and applications will show an application-specific error. To ensure the ubiquity of the Amazon Trust Services CA, AWS purchased the Starfield Services CA, a root certificate that is found in most browsers and has been valid since 2005. This means you shouldn’t have to take any action to use the certificates issued by Amazon Trust Services.

    AWS has been offering free certificates to AWS customers from the Amazon Trust Services CA. Now, AWS is in the process of moving certificates for services such as Amazon EC2 and Amazon DynamoDB to use certificates from Amazon Trust Services as well. Most software doesn’t need to be changed to handle this transition, but there are exceptions. In this blog post, I show you how to verify that you are prepared to use the Amazon Trust Services CA.

    How to tell if the Amazon Trust Services CAs are in your trust store

    The following table lists the Amazon Trust Services certificates. To verify that these certificates are in your browser’s trust store, click each Test URL in the following table to verify that it works for you. When a Test URL does not work, it displays an error similar to this example.

    Distinguished name | SHA-256 hash of subject public key information | Test URL
    CN=Amazon Root CA 1,O=Amazon,C=US | fbe3018031f9586bcbf41727e417b7d1c45c2f47f93be372a17b96b50757d5a2 | Test URL
    CN=Amazon Root CA 2,O=Amazon,C=US | 7f4296fc5b6a4e3b35d3c369623e364ab1af381d8fa7121533c9d6c633ea2461 | Test URL
    CN=Amazon Root CA 3,O=Amazon,C=US | 36abc32656acfc645c61b71613c4bf21c787f5cabbee48348d58597803d7abc9 | Test URL
    CN=Amazon Root CA 4,O=Amazon,C=US | f7ecded5c66047d28ed6466b543c40e0743abe81d109254dcf845d4c2c7853c5 | Test URL
    CN=Starfield Services Root Certificate Authority - G2,O=Starfield Technologies\, Inc.,L=Scottsdale,ST=Arizona,C=US | 2b071c59a0a0ae76b0eadb2bad23bad4580b69c3601b630c2eaf0613afa83f92 | Test URL
    Starfield Class 2 Certification Authority | 15f14ac45c9c7da233d3479164e8137fe35ee0f38ae858183f08410ea82ac4b4 | Not available*

    * Note: Amazon doesn’t own this root and doesn’t have a test URL for it. The certificate can be downloaded from here.
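
    If you’d rather check from the command line than a browser, you can request one of the test endpoints with curl. This is a sketch; the hostname below is assumed to be one of the Amazon Trust Services test URLs, so substitute the Test URL for the root you want to verify.

    # Succeeds (HTTP 200) if the CA is in your system trust store;
    # fails with a certificate error if it is not
    curl -I https://good.sca1a.amazontrust.com/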

    You can calculate the SHA-256 hash of Subject Public Key Information as follows. With the PEM-encoded certificate stored in certificate.pem, run the following openssl commands:

    openssl x509 -in certificate.pem -noout -pubkey | openssl asn1parse -noout -inform pem -out certificate.key
    openssl dgst -sha256 certificate.key

    As an example, with the Starfield Class 2 Certification Authority self-signed cert in a PEM encoded file sf-class2-root.crt, you can use the following openssl commands:

    openssl x509 -in sf-class2-root.crt -noout -pubkey | openssl asn1parse -noout -inform pem -out sf-class2-root.key
    openssl dgst -sha256 sf-class2-root.key
    SHA256(sf-class2-root.key)= 15f14ac45c9c7da233d3479164e8137fe35ee0f38ae858183f08410ea82ac4b4

    What to do if the Amazon Trust Services CAs are not in your trust store

    If your tests of any of the Test URLs failed, you must update your trust store. The easiest way to update your trust store is to upgrade the operating system or browser that you are using.

    You will find the Amazon Trust Services CAs in the following operating systems (release dates are in parentheses):

    • Microsoft Windows versions that have January 2005 or later updates installed, Windows Vista, Windows 7, Windows Server 2008, and newer versions
    • Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5, Mac OS X 10.5 and newer versions
    • Red Hat Enterprise Linux 5 (March 2007), Linux 6, and Linux 7 and CentOS 5, CentOS 6, and CentOS 7
    • Ubuntu 8.10
    • Debian 5.0
    • Amazon Linux (all versions)
    • Java 1.4.2_12, Java 5 update 2, and all newer versions, including Java 6, Java 7, and Java 8

    All modern browsers trust Amazon’s CAs. You can update the certificate bundle in your browser simply by updating your browser. You can find instructions for updating the following browsers on their respective websites:

    If your application is using a custom trust store, you must add the Amazon root CAs to your application’s trust store. The instructions for doing this vary based on the application or platform. Please refer to the documentation for the application or platform you are using.
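
    As one hedged example, a Java application that uses its own trust store could import an Amazon root with keytool. The keystore path, alias, and password below are illustrative assumptions; the certificate URL is the Amazon Trust Services repository location for Amazon Root CA 1.

    # Download Amazon Root CA 1 in PEM form
    curl -O https://www.amazontrust.com/repository/AmazonRootCA1.pem

    # Import it into an application-specific Java trust store
    # (keystore path, alias, and password are placeholders)
    keytool -importcert -trustcacerts -alias amazonrootca1 \
        -file AmazonRootCA1.pem -keystore my-truststore.jks -storepass changeit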

    AWS SDKs and CLIs

    Most AWS SDKs and CLIs are not impacted by the transition to the Amazon Trust Services CA. If you are using a version of the Python AWS SDK or CLI released before October 29, 2013, you must upgrade. The .NET, Java, PHP, Go, JavaScript, and C++ SDKs and CLIs do not bundle any certificates, so their certificates come from the underlying operating system. The Ruby SDK has included at least one of the required CAs since June 10, 2015. Before that date, the Ruby V2 SDK did not bundle certificates.

    Certificate pinning

    If you are using a technique called certificate pinning to lock down the CAs you trust on a domain-by-domain basis, you must adjust your pinning to include the Amazon Trust Services CAs. Certificate pinning helps defend you from an attacker using misissued certificates to fool an application into creating a connection to a spoofed host (an illegitimate host masquerading as a legitimate host). The restriction to a specific, pinned certificate is made by checking that the certificate issued is the expected certificate. This is done by checking that the hash of the certificate public key received from the server matches the expected hash stored in the application. If the hashes do not match, the code stops the connection.

    AWS recommends against using certificate pinning because it introduces a potential availability risk. If the certificate to which you pin is replaced, your application will fail to connect. If your use case requires pinning, we recommend that you pin to a CA rather than to an individual certificate. If you are pinning to an Amazon Trust Services CA, you should pin to all CAs shown in the table earlier in this post.
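
    If you do pin, the value you need is the same SHA-256 hash of the Subject Public Key Information shown in the table earlier. Here is a sketch of computing it for the certificate presented by a live endpoint (the hostname is an assumption); to pin a CA rather than a leaf certificate, run the same x509/pkey/dgst pipeline over the CA’s PEM file instead, as shown earlier in this post.

    # Fetch the server certificate and hash its public key info
    openssl s_client -connect good.sca1a.amazontrust.com:443 \
        -servername good.sca1a.amazontrust.com </dev/null 2>/dev/null \
      | openssl x509 -pubkey -noout \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256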

    If you have comments about this post, submit them in the “Comments” section below. If you have questions about this post, start a new thread on the ACM forum.

    – Jonathan

    Backing Up Linux to Backblaze B2 with Duplicity and Restic

    Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-linux-backblaze-b2-duplicity-restic/

    Linux users have a variety of options for handling data backup. The choices range from free and open-source programs to paid commercial tools, and include applications that are purely command-line based (CLI) and others that have a graphical interface (GUI), or both.

    If you take a look at our Backblaze B2 Cloud Storage Integrations page, you will see a number of offerings that enable you to back up your Linux desktops and servers to Backblaze B2. These include CloudBerry, Duplicity, Duplicacy, 45 Drives, GoodSync, HashBackup, QNAP, Restic, and Rclone, plus other choices for NAS and hybrid uses.

    In this post, we’ll discuss two popular open-source command line programs: one older, Duplicity, and a new player, Restic.

    Old School vs. New School

    We’re highlighting Duplicity and Restic today because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

    Old School (Duplicity)

    In the old school model, data is written sequentially to the storage medium. Once a section of data is recorded, new data is written starting where that section of data ends. It’s not possible to go back and change the data that’s already been written.

    This old-school model has long been associated with the use of magnetic tape, a prime example of which is the LTO (Linear Tape-Open) standard. In this “write once” model, files are always appended to the end of the tape. If a file is modified and overwritten or removed from the volume, the associated tape blocks used are not freed up: they are simply marked as unavailable, and the used volume capacity is not recovered. Data is deleted and capacity recovered only if the whole tape is reformatted. As a Linux/Unix user, you undoubtedly are familiar with the TAR archive format, which is an acronym for Tape ARchive. TAR has been around since 1979 and was originally developed to write data to sequential I/O devices with no file system of their own.
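
    As a quick illustration, the TAR format still behaves this way even on disk. A minimal sketch (file paths are examples):

    # Create a TAR archive, then append a changed file; TAR only ever
    # writes at the end, so the old copy stays in the archive
    tar -cf backup.tar ~/Documents
    tar -rf backup.tar ~/Documents/report.txt
    tar -tf backup.tar   # lists both the original and the appended entry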

    It is from the use of tape that we get the full backup/incremental backup approach to backups. A backup sequence begins with a full backup of data. Each incremental backup contains what’s been changed since the last full backup, until the next full backup is made and the process starts over, filling more and more tape or whatever medium is being used.

    This is the model used by Duplicity: full and incremental backups. Duplicity backs up files by producing encrypted, digitally signed, versioned, TAR-format volumes and uploading them to a remote location, including Backblaze B2 Cloud Storage. Released under the terms of the GNU General Public License (GPL), Duplicity is free software.

    With Duplicity, the first archive is a complete (full) backup, and subsequent (incremental) backups only add differences from the latest full or incremental backup. Chains consisting of a full backup and a series of incremental backups can be recovered to the point in time that any of the incremental steps were taken. If any of the incremental backups are missing, then reconstructing a complete and current backup is much more difficult and sometimes impossible.

    Duplicity is available on many Unix-like operating systems (such as Linux, BSD, and Mac OS X) and ships with many popular Linux distributions including Ubuntu, Debian, and Fedora. It also can be used with Windows under Cygwin.

    We recently published a KB article on How to configure Backblaze B2 with Duplicity on Linux that demonstrates how to set up Duplicity with B2 and back up and restore a directory from Linux.
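
    For a flavor of what that looks like, here is a minimal sketch of backing up and restoring with Duplicity against B2. The key ID, application key, bucket, and paths are placeholder assumptions; see the KB article for the full setup.

    # Back up a directory to a B2 bucket (credentials and names are placeholders)
    duplicity ~/Documents b2://keyID:applicationKey@my-bucket/documents

    # Restore the most recent backup chain to a different directory
    duplicity b2://keyID:applicationKey@my-bucket/documents ~/Restored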

    New School (Restic)

    With the arrival of non-sequential storage media, such as disk drives, and new ideas such as deduplication, comes the new school approach, which is used by Restic. Data can be written and changed anywhere on the storage medium. This efficiency comes largely through the use of deduplication. Deduplication is a process that eliminates redundant copies of data and reduces storage overhead. Data deduplication techniques ensure that only one unique instance of data is retained on storage media, greatly increasing storage efficiency and flexibility.

    Restic is a recently available multi-platform command line backup software program that is designed to be fast, efficient, and secure. Restic supports a variety of backends for storing backups, including a local server, SFTP server, HTTP Rest server, and a number of cloud storage providers, including Backblaze B2.

    Files are uploaded to a B2 bucket as deduplicated, encrypted chunks. Each time a backup runs, only changed data is backed up. On each backup run, a snapshot is created enabling restores to a specific date or time.
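
    In practice, that workflow looks something like the sketch below; the bucket name, repository path, and credentials are placeholder assumptions.

    # Restic reads B2 credentials from the environment (placeholders shown)
    export B2_ACCOUNT_ID="keyID"
    export B2_ACCOUNT_KEY="applicationKey"

    # Create the encrypted repository once, then back up; each run adds a snapshot
    restic -r b2:my-bucket:linux-backup init
    restic -r b2:my-bucket:linux-backup backup ~/Documents

    # List snapshots and restore a point in time
    restic -r b2:my-bucket:linux-backup snapshots
    restic -r b2:my-bucket:linux-backup restore latest --target ~/Restored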

    Restic assumes that the storage location for the repository is shared, so it always encrypts the backed-up data. This is in addition to any encryption and security from the storage provider.

    Restic is free and open-source software, licensed under the BSD 2-Clause License and actively developed on GitHub.

    There’s a lot more you can do with Restic, including adding tags, mounting a repository locally, and scripting. To learn more, you can review the documentation at https://restic.readthedocs.io.

    To coincide with this blog post, we published a KB article, How to configure Backblaze B2 with Restic on Linux, in which we show how to set up Restic for use with B2 and how to back up and restore a home directory from Linux to B2.

    Which is Right for You?

    While Duplicity is a popular, widely-available, and useful program, many users of cloud storage solutions such as B2 are moving to new-school solutions like Restic that take better advantage of the non-sequential access capabilities and speed of modern storage media used by cloud storage providers.

    Tell us how you’re backing up Linux

    Please let us know in the comments what you’re using for Linux backups, and if you have experience using Duplicity, Restic, or other backup software with Backblaze B2.

    The post Backing Up Linux to Backblaze B2 with Duplicity and Restic appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    jSQL – Automatic SQL Injection Tool In Java

    Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/vEsd_Exo0S0/

    jSQL is an automatic SQL injection tool written in Java. It’s lightweight and supports 23 kinds of databases. It is free, open source and cross-platform (Windows, Linux, Mac OS X) and is easily available in Kali, Pentest Box, Parrot Security OS, ArchStrike or BlackArch Linux. Features Automatic injection of 23 kinds of databases: Access CockroachDB…

    Read the full post at darknet.org.uk

    Deploying an NGINX Reverse Proxy Sidecar Container on Amazon ECS

    Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/nginx-reverse-proxy-sidecar-container-on-amazon-ecs/

    Reverse proxies are a powerful software architecture primitive for fetching resources from a server on behalf of a client. They serve a number of purposes, from protecting servers from unwanted traffic to offloading some of the heavy lifting of HTTP traffic processing.

    This post explains the benefits of a reverse proxy, and explains how to use NGINX and Amazon EC2 Container Service (Amazon ECS) to easily implement and deploy a reverse proxy for your containerized application.

    Components

    NGINX is a high performance HTTP server that has achieved significant adoption because of its asynchronous event driven architecture. It can serve thousands of concurrent requests with a low memory footprint. This efficiency also makes it ideal as a reverse proxy.

    Amazon ECS is a highly scalable, high performance container management service that supports Docker containers. It allows you to run applications easily on a managed cluster of Amazon EC2 instances. Amazon ECS helps you get your application components running on instances according to a specified configuration. It also helps scale out these components across an entire fleet of instances.

    Sidecar containers are a common software pattern that has been embraced by engineering organizations. It’s a way to keep server side architecture easier to understand by building with smaller, modular containers that each serve a simple purpose. Just like an application can be powered by multiple microservices, each microservice can also be powered by multiple containers that work together. A sidecar container is simply a way to move part of the core responsibility of a service out into a containerized module that is deployed alongside a core application container.

    The following diagram shows how an NGINX reverse proxy sidecar container operates alongside an application server container:

    In this architecture, Amazon ECS has deployed two copies of an application stack that is made up of an NGINX reverse proxy sidecar container and an application container. Web traffic from the public goes to an Application Load Balancer, which then distributes the traffic to one of the NGINX reverse proxy sidecars. The NGINX reverse proxy then forwards the request to the application server and returns its response to the client via the load balancer.

    Reverse proxy for security

    Security is one reason for using a reverse proxy in front of an application container. Any web server that serves resources to the public can expect to receive lots of unwanted traffic every day. Some of this traffic is relatively benign scans by researchers and tools, such as Shodan or nmap:

    [18/May/2017:15:10:10 +0000] "GET /YesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScann HTTP/1.1" 404 1389 - Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36
    [18/May/2017:18:19:51 +0000] "GET /clientaccesspolicy.xml HTTP/1.1" 404 322 - Cloud mapping experiment. Contact [email protected]

    But other traffic is much more malicious. For example, here is what a web server sees while being scanned by the hacking tool ZmEu, which scans web servers trying to find PHPMyAdmin installations to exploit:

    [18/May/2017:16:27:39 +0000] "GET /mysqladmin/scripts/setup.php HTTP/1.1" 404 391 - ZmEu
    [18/May/2017:16:27:39 +0000] "GET /web/phpMyAdmin/scripts/setup.php HTTP/1.1" 404 394 - ZmEu
    [18/May/2017:16:27:39 +0000] "GET /xampp/phpmyadmin/scripts/setup.php HTTP/1.1" 404 396 - ZmEu
    [18/May/2017:16:27:40 +0000] "GET /apache-default/phpmyadmin/scripts/setup.php HTTP/1.1" 404 405 - ZmEu
    [18/May/2017:16:27:40 +0000] "GET /phpMyAdmin-2.10.0.0/scripts/setup.php HTTP/1.1" 404 397 - ZmEu
    [18/May/2017:16:27:40 +0000] "GET /mysql/scripts/setup.php HTTP/1.1" 404 386 - ZmEu
    [18/May/2017:16:27:41 +0000] "GET /admin/scripts/setup.php HTTP/1.1" 404 386 - ZmEu
    [18/May/2017:16:27:41 +0000] "GET /forum/phpmyadmin/scripts/setup.php HTTP/1.1" 404 396 - ZmEu
    [18/May/2017:16:27:41 +0000] "GET /typo3/phpmyadmin/scripts/setup.php HTTP/1.1" 404 396 - ZmEu
    [18/May/2017:16:27:42 +0000] "GET /phpMyAdmin-2.10.0.1/scripts/setup.php HTTP/1.1" 404 399 - ZmEu
    [18/May/2017:16:27:44 +0000] "GET /administrator/components/com_joommyadmin/phpmyadmin/scripts/setup.php HTTP/1.1" 404 418 - ZmEu
    [18/May/2017:18:34:45 +0000] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 390 - ZmEu
    [18/May/2017:16:27:45 +0000] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 401 - ZmEu

    In addition, servers can also end up receiving unwanted web traffic that is intended for another server. In a cloud environment, an application may end up reusing an IP address that was formerly connected to another service. It’s common for misconfigured or misbehaving DNS servers to send traffic intended for a different host to an IP address now connected to your server.

    It’s the responsibility of anyone running a web server to handle and reject potentially malicious or unwanted traffic. Ideally, the web server can reject this traffic as early as possible, before it actually reaches the core application code. A reverse proxy is one way to provide this layer of protection for an application server. It can be configured to reject these requests before they reach the application server.

    Reverse proxy for performance

    Another advantage of using a reverse proxy such as NGINX is that it can be configured to offload some heavy lifting from your application container. For example, every HTTP server should support gzip. Whenever a client requests gzip encoding, the server compresses the response before sending it back to the client. This compression saves network bandwidth, which also improves speed for clients who now don’t have to wait as long for a response to fully download.

    NGINX can be configured to accept a plaintext response from your application container and gzip encode it before sending it down to the client. This allows your application container to focus 100% of its CPU allotment on running business logic, while NGINX handles the encoding with its efficient gzip implementation.

    An application may have security concerns that require SSL termination at the instance level instead of at the load balancer. NGINX can also be configured to terminate SSL before proxying the request to a local application container. Again, this also removes some CPU load from the application container, allowing it to focus on running business logic. It also gives you a cleaner way to patch any SSL vulnerabilities or update SSL certificates by updating the NGINX container without needing to change the application container.

    NGINX configuration

    The following NGINX configuration handles both traffic filtering and gzip encoding:

    http {
      # NGINX will handle gzip compression of responses from the app server
      gzip on;
      gzip_proxied any;
      gzip_types text/plain application/json;
      gzip_min_length 1000;
     
      server {
        listen 80;
     
        # NGINX will reject anything not matching /api
        location /api {
          # Reject requests with unsupported HTTP method
          if ($request_method !~ ^(GET|POST|HEAD|OPTIONS|PUT|DELETE)$) {
            return 405;
          }
     
          # Only requests matching the whitelist expectations will
          # get sent to the application server
          proxy_pass http://app:3000;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection 'upgrade';
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_cache_bypass $http_upgrade;
        }
      }
    }

    The above configuration only accepts traffic that matches the expression /api and has a recognized HTTP method. If the traffic matches, it is forwarded to a local application container accessible at the local hostname app. If the client requested gzip encoding, the plaintext response from that application container is gzip-encoded.
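
    You can see the filtering behavior with a few curl requests against a running copy of this stack; the hostname and paths are assumptions for a local test.

    # Allowed path and method: proxied through to the app container
    curl -i http://localhost/api/users

    # Unsupported HTTP method on an allowed path: rejected with a 405
    curl -i -X TRACE http://localhost/api/users

    # Path outside /api: never reaches the app; NGINX returns a 404
    curl -i http://localhost/admin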

    Amazon ECS configuration

    Configuring ECS to run this NGINX container as a sidecar is also simple. ECS uses a core primitive called the task definition. Each task definition can include one or more containers, which can be linked to each other:

    {
      "containerDefinitions": [
        {
          "name": "nginx",
          "image": "<NGINX reverse proxy image URL here>",
          "memory": 256,
          "cpu": 256,
          "essential": true,
          "portMappings": [
            {
              "containerPort": 80,
              "protocol": "tcp"
            }
          ],
          "links": [
            "app"
          ]
        },
        {
          "name": "app",
          "image": "<app image URL here>",
          "memory": 256,
          "cpu": 256,
          "essential": true
        }
      ],
      "networkMode": "bridge",
      "family": "application-stack"
    }

    This task definition causes ECS to start both an NGINX container and an application container on the same instance. Then, the NGINX container is linked to the application container. This allows the NGINX container to send traffic to the application container using the hostname app.

    The NGINX container has a port mapping that exposes port 80 on a publicly accessible port, but the application container does not. This means that the application container is not directly addressable. The only way to send it traffic is to send traffic to the NGINX container, which filters that traffic down. It only forwards traffic to the application container if the traffic passes the whitelisted rules.
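
    To try this yourself, save the task definition to a file and register it with the AWS CLI; the file and cluster names below are assumptions.

    # Register the sidecar task definition with ECS
    aws ecs register-task-definition --cli-input-json file://nginx-sidecar-task.json

    # Run it on a cluster (cluster name is a placeholder)
    aws ecs run-task --cluster default --task-definition application-stack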

    Conclusion

    Running a sidecar container such as NGINX can bring significant benefits by making it easier to provide protection for application containers. Sidecar containers also improve performance by freeing your application container from various CPU intensive tasks. Amazon ECS makes it easy to run sidecar containers, and automate their deployment across your cluster.

    To see the full code for this NGINX sidecar reference, or to try it out yourself, you can check out the open source NGINX reverse proxy reference architecture on GitHub.

    – Nathan
     @nathankpeck

    TheFatRat – Massive Exploitation Tool

    Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/A3ozOmH4BDE/

    TheFatRat is an easy-to-use Exploitation Tool that can help you to generate backdoors and post exploitation attacks like browser attack DLL files. This tool compiles malware with popular payloads and then the compiled malware can be executed on Windows, Linux, Mac OS X and Android. The malware that is created with this tool also has […]

    The…

    Read the full post at darknet.org.uk

    Test Your Streaming Data Solution with the New Amazon Kinesis Data Generator

    Post Syndicated from Allan MacInnis original https://aws.amazon.com/blogs/big-data/test-your-streaming-data-solution-with-the-new-amazon-kinesis-data-generator/

    When building a streaming data solution, most customers want to test it with data that is similar to their production data. Creating this data and streaming it to your solution can often be the most tedious task in testing the solution.

    Amazon Kinesis Streams and Amazon Kinesis Firehose enable you to continuously capture and store terabytes of data per hour from hundreds of thousands of sources. Amazon Kinesis Analytics gives you the ability to use standard SQL to analyze and aggregate this data in real-time. It’s easy to create an Amazon Kinesis stream or Firehose delivery stream with just a few clicks in the AWS Management Console (or a few commands using the AWS CLI or Amazon Kinesis API). However, to generate a continuous stream of test data, you must write a custom process or script that runs continuously, using the AWS SDK or CLI to send test records to Amazon Kinesis. Although this task is necessary to adequately test your solution, it means more complexity and longer development and testing times.

    Wouldn’t it be great if there were a user-friendly tool to generate test data and send it to Amazon Kinesis? Well, now there is—the Amazon Kinesis Data Generator (KDG).

    KDG overview

    The KDG simplifies the task of generating data and sending it to Amazon Kinesis. The tool provides a user-friendly UI that runs directly in your browser. With the KDG, you can do the following:

    • Create templates that represent records for your specific use cases
    • Populate the templates with fixed data or random data
    • Save the templates for future use
    • Continuously send thousands of records per second to your Amazon Kinesis stream or Firehose delivery stream

    The KDG is open source, and you can find the source code on the Amazon Kinesis Data Generator repo in GitHub. Because the tool is a collection of static HTML and JavaScript files that run directly in your browser, you can start using it immediately without downloading or cloning the project. It is enabled as a static site in GitHub, and we created a short URL to access it.

    To get started immediately, check it out at http://amzn.to/datagen.

    Using the KDG

    Getting started with the KDG requires only three short steps:

    1. Create an Amazon Cognito user in your AWS account (first-time only).
    2. Use this user’s credentials to log in to the KDG.
    3. Create a record template for your data.

    When you’ve completed these steps, you can then send data to Streams or Firehose.

    Create an Amazon Cognito user

    The KDG is a great example of a browser-based application that uses Amazon Cognito for a user repository and user authentication, and the AWS JavaScript SDK to communicate with AWS services directly from your browser. For information about how to build your own JavaScript application that uses Amazon Cognito, see Use Amazon Cognito in your website for simple AWS authentication on the AWS Mobile Blog.

    Before you can start sending data to your Amazon Kinesis stream, you must create an Amazon Cognito user in your account who can write to Streams and Firehose. When you create the user, you create a username and password for that user. You use those credentials to sign in to the KDG. To simplify creating the Amazon Cognito user in your account, we created a Lambda function and a CloudFormation template. For more information about creating the Amazon Cognito user in your AWS account, see Configure Your AWS Account.

    Note:  It’s important that you use the URL provided by the output of the CloudFormation stack the first time that you access the KDG. This URL contains parameters needed by the KDG. The KDG stores the values of these parameters locally, so you can then access the tool using the short URL, http://amzn.to/datagen.

    Log in to the KDG

    After you create an Amazon Cognito user in your account, the next step is to log in to the KDG. To do this, provide the username and password that you created earlier.

    On the main page, you can configure your data templates and send data to an Amazon Kinesis stream or Firehose delivery stream.

    The basic configuration is simple enough. All fields on the page are required:

    • Region: Choose the AWS Region that contains the Amazon Kinesis stream or Firehose delivery stream to receive your streaming data.
    • Stream/firehose name: Choose the name of the stream or delivery stream to receive your streaming data.
    • Records per second: Enter the number of records to send to your stream or delivery stream each second.
    • Record template: Enter the raw data, or a template that represents your data structure, to be used for each record sent by the KDG. For information about creating templates for your data, see the “Creating Record Templates” section, later in this post.

    When you set the Records per second value, consider that the KDG isn’t intended to be a data producer for load-testing your application. However, it can easily send several thousand records per second from a single tab in your browser, which is plenty of data for most applications. In testing, the KDG has produced 80,000 records per second to a single Amazon Kinesis stream, but your mileage may vary. The maximum rate at which it produces records depends on your computer’s specs and the complexity of your record template.

    Ensure that your stream or delivery stream is scaled appropriately:

    • 1,000 records/second or 1 MB/second to an Amazon Kinesis stream
    • 5,000 records/second or 5 MB/second to a Firehose delivery stream

    Otherwise, Amazon Kinesis may reject records, and you won’t achieve your desired throughput. For more information about adding capacity to a stream by adding more shards, see Resharding a Stream. For information about increasing the capacity of a delivery stream, see Amazon Kinesis Firehose Limits.
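
    For example, adding capacity to an Amazon Kinesis stream can be done from the AWS CLI; the stream name and shard count below are assumptions.

    # Double the capacity of a two-shard stream before a heavy KDG run
    aws kinesis update-shard-count --stream-name my-test-stream \
        --target-shard-count 4 --scaling-type UNIFORM_SCALING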

    Create record templates

    The Record Template field is a free-text field where you can enter any text that represents a single streaming data record. You can create a single line of static data, so that each record sent to Amazon Kinesis is identical. Or, you can format the text as a template.

    In this case, the KDG substitutes portions of the template with fake or random data before sending the record. This lets you introduce randomness or variability in each record that is sent in your data stream. The KDG uses Faker.js, an open source library, to generate fake data. For more information, see the faker.js project page in GitHub. The easiest way to see how this works is to review an example.

    To simulate records being sent from a weather sensor Internet of Things (IoT) device, you want each record to be formatted in JSON. The following is an example of what a final record must look like:

    {
    	"sensorId": 40,
    	"currentTemperature": 76,
    	"status": "OK"
    } 

    For this use case, you want to simulate sending data from one of 50 sensors, so the sensorId field can be an integer between 1 and 50. The temperature value can range between 10 and 150, so the currentTemperature field should contain a value in this range. Finally, the status value can be one of three possible values: OK, FAIL, and WARN. The KDG template format uses moustache syntax (double curly-braces) to enclose items that should be replaced before the record is sent to Amazon Kinesis. To model the record, the template looks like this:

    {
        "sensorId": {{random.number(50)}},
        "currentTemperature": {{random.number(
            {
                "min":10,
                "max":150
            }
        )}},
        "status": "{{random.arrayElement(
            ["OK","FAIL","WARN"]
        )}}"
    }

    Take a look at one more example, simulating a stream of records that represent rows from an Apache access log. A single Apache access log entry might look like this:

    76.0.56.179 - - [29/Apr/2017:16:32:11 -05:00] "GET /wp-admin" 200 8233 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0 rv:6.0; CY) AppleWebKit/535.0.0 (KHTML, like Gecko) Version/4.0.3 Safari/535.0.0"

    The following example shows how to create a template for the Apache access log:

    {{internet.ip}} - - [{{date.now("DD/MMM/YYYY:HH:mm:ss Z")}}] "{{random.weightedArrayElement({"weights":[0.6,0.1,0.1,0.2],"data":["GET","POST","DELETE","PUT"]})}} {{random.arrayElement(["/list","/wp-content","/wp-admin","/explore","/search/tag/list","/app/main/posts","/posts/posts/explore"])}}" {{random.weightedArrayElement({"weights": [0.9,0.04,0.02,0.04], "data":["200","404","500","301"]})}} {{random.number(10000)}} "-" "{{internet.userAgent}}"
    

    For more information about creating your own templates, see the Record Template section of the KDG documentation.

    The KDG saves the templates that you create in your local browser storage. As long as you use the same browser on the same computer, you can reuse up to five templates.

    Summary

    Testing your streaming data solution has never been easier. Get started today by visiting the KDG hosted UI or its Amazon Kinesis Data Generator page in GitHub. The project is licensed under the Apache 2.0 license, so feel free to clone and modify it for your own use as necessary. And of course, please submit any issues or pull requests via GitHub.

    If you have any questions or suggestions, please add them below.

     


    About the Author

    Allan MacInnis is a Solutions Architect at Amazon Web Services. He works with our customers to help them build streaming data solutions using Amazon Kinesis. In his spare time, he enjoys mountain biking and spending time with his family.

     

     



    New – Web Access for Amazon WorkSpaces

    Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-web-access-for-amazon-workspaces/

    We launched WorkSpaces in late 2013 (Amazon WorkSpaces – Desktop Computing in the Cloud) and have been adding new features at a rapid clip. Here are some highlights from 2016:

    Today we are adding to this list with Amazon WorkSpaces Web Access. You can now access your WorkSpace from recent versions of Chrome or Firefox running on Windows, Mac OS X, or Linux. This lets you be productive on heavily restricted networks and in situations where installing a WorkSpaces client is not an option. You don’t have to download or install anything, and you can use it from a public computer without leaving any private or cached data behind.

    To use Amazon WorkSpaces Web Access, simply visit the registration page using a supported browser and enter the registration code for your WorkSpace:

    Then log in with your user name and password:

    And here you go (yes, this is IE and Firefox running on WorkSpaces, displayed in Chrome):

    This feature is available for all new WorkSpaces, and you can access it at no additional charge after your administrator enables it.

    Existing WorkSpaces must be rebuilt and custom images must be refreshed in order to take advantage of Web Access.

    Jeff;

     

    Zenmap – Official Cross-Platform Nmap GUI

    Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/VHgaFJ2oiuk/

    Zenmap is the official Nmap GUI. It is a multi-platform (Linux, Windows, Mac OS X, BSD, etc.) free and open source application which aims to make Nmap easy for beginners to use while providing advanced features for experienced Nmap users. No frontend can replace good old command-line Nmap. The nature of a frontend is that […]

    The post…

    Read the full post at darknet.org.uk

    Microsoft announces PowerShell for Linux and Open Source

    Post Syndicated from jake original http://lwn.net/Articles/697609/rss

    Microsoft has announced the release of its PowerShell automation and scripting platform under the MIT license, complete with a GitHub repository. “Last year we started down this path by contributing to a number of open source projects (e.g. OpenSSH) and open sourcing a number of our own components including DSC resources. We learned that working closely with the community, in the code and with our backlog and issues list, allowed us [to] prioritize and drive the development much more responsively. We’ve always worked with the community but shifting to a fine-grain, tight, feedback loop with the code, energized the team and allowed us to focus on the things that had the most impact for our customers and partners. Now we are going big by making PowerShell itself an open source project and making it available on Mac OS X, Ubuntu, CentOS/RedHat and others in the future.”

    Google Rapid Response (GRR ) – Remote Live Forensics For Incident Response

    Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/pqSSf3VI9-A/

    GRR Rapid Response is an incident response framework focused on remote live forensics. It is based on a client-server architecture: an agent is installed on target systems, and a Python server infrastructure manages and communicates with the agents. There are agents for Windows, Linux and Mac OS X environments. Overview To…

    Read the full post at darknet.org.uk

    Mac OS X Ransomware KeRanger Is Linux Encoder Trojan

    Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/A3QDRb8kTfE/

    So there’s been a fair bit of noise this past week about the Mac OS X ransomware, the first of its kind, called KeRanger. It also happens to be the first popular Mac malware of any form for some time. It’s also a lesson to all the Apple fanbois that their OS is not impervious […]

    The post Mac OS X Ransomware KeRanger Is Linux Encoder Trojan…

    Read the full post at darknet.org.uk

    Don’t spoof the momentum

    Post Syndicated from Йовко Ламбрев original http://yovko.net/dont-spoof-the-momentum/

    This is an unremarkable little picture:
    MAC clone interface
    It shows a field in the settings web interface of my home WiFi router where, if I type an arbitrary combination of the form xx:xx:xx:xx:xx:xx, where each xx is a hexadecimal number (one in base 16), and click the little button next to it, I will change my router’s MAC address. It’s that easy!
    And with the incantation below I can generate a random MAC address for myself. It works on Mac OS X, and it will work on Linux and most UNIX-based things. It looks strange to the uninitiated observer, but you don’t need to understand how it works. It will work with copy & paste, too, and it returns an answer right away. Every time I run it the result is different, and it always does the job.
    $ openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/.$//'
    cf:91:34:2c:65:aa
    Now I can use the result by typing this in the terminal of my Mac:
    $ sudo ifconfig en0 ether cf:91:34:2c:65:aa
    Done. I changed my computer’s MAC address to a new and completely random one. OK, this is not exactly a permanent change, but it can stay in effect as persistently, or for as short a time, as I need. It’s called spoofing when you try to pass yourself off as something or someone else. A simple Google search for spoof mac address or change mac address will shower you with further ideas on the topic.
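    To check that the change took effect (assuming the interface is en0, as above):
    $ ifconfig en0 | grep ether
    ether cf:91:34:2c:65:aa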
    All this is apropos of the proposed bill amending the Electoral Code and of how the MAC address was supposedly going to ensure that no more than three people vote…
    Clearly, the impatience is great. Clearly, society expects an answer from the politicians after the referendum, but that answer had better be adequate too, because the devil is in the details. And the details in the proposed bill that deserve criticism are by no means limited to the one above, but that one leaps out and causes turbulence in the heads of people who have spent even a little time with networks, the Internet, and communications.
    Isn’t it time for drafts like this to go through discussion in some public format, with citizens and experts, before they reach Parliament? We practically live on the Internet already, and around the world there are good examples of public, collaborative online drafting of documents and their preliminary discussion. The subject is a technical one; what do party colors matter here?
    The expectations of society (or at least of its rational part) are that electronic voting be implemented, if not perfectly, then at least satisfactorily well. Substituting texts thrown together on someone’s knee for those expectations is not a good idea. Spoofing this momentum in people’s hopes and expectations is easier than spoofing a MAC address. Is that what we want?