Security updates have been issued by Arch Linux (strongswan, wireshark-cli, wireshark-common, wireshark-gtk, and wireshark-qt), CentOS (libvirt, procps-ng, and thunderbird), Debian (apache2, git, and qemu), Gentoo (beep, git, and procps), Mageia (mariadb, microcode, python, virtualbox, and webkit2), openSUSE (ceph, pdns, and perl-DBD-mysql), Red Hat (kernel), SUSE (HA kernel modules, libmikmod, ntp, and tiff), and Ubuntu (nvidia-graphics-drivers-384).
Security updates have been issued by Debian (batik, cups, gitlab, ming, and xdg-utils), Fedora (dpdk, firefox, glibc, nodejs-deep-extend, strongswan, thunderbird, thunderbird-enigmail, wavpack, xdg-utils, and xen), Gentoo (ntp, rkhunter, and zsh), openSUSE (Chromium, GraphicsMagick, jasper, opencv, pdns, and wireshark), SUSE (jasper, java-1_7_1-ibm, krb5, libmodplug, and openstack-nova), and Ubuntu (thunderbird).
Security updates have been issued by Arch Linux (bind, libofx, and thunderbird), Debian (thunderbird, xdg-utils, and xen), Fedora (procps-ng), Mageia (gnupg2, mbedtls, pdns, and pdns-recursor), openSUSE (bash, GraphicsMagick, icu, and kernel), Oracle (thunderbird), Red Hat (java-1.7.1-ibm, java-1.8.0-ibm, and thunderbird), Scientific Linux (thunderbird), and Ubuntu (curl).
Security updates have been issued by Arch Linux (curl and zathura-pdf-mupdf), Debian (libmad and vlc), openSUSE (enigmail), Red Hat (collectd, Red Hat OpenStack Platform director, and sensu), and SUSE (firefox, ghostscript, and mysql).
Security updates have been issued by Arch Linux (libmupdf, mupdf, mupdf-gl, and mupdf-tools), Debian (firebird2.5, firefox-esr, and wget), Fedora (ckeditor, drupal7, firefox, kubernetes, papi, perl-Dancer2, and quassel), openSUSE (cairo, firefox, ImageMagick, libapr1, nodejs6, php7, and tiff), Red Hat (qemu-kvm-rhev), Slackware (mariadb), SUSE (xen), and Ubuntu (openjdk-8).
Post Syndicated from Kapil Shardha original https://aws.amazon.com/blogs/big-data/use-aws-glue-to-run-etl-jobs-against-non-native-jdbc-data-sources/
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easier to prepare and load your data for analytics. You can create and run an ETL job with a few clicks on the AWS Management Console. Just point AWS Glue to your data store. AWS Glue discovers your data and stores the associated metadata (for example, a table definition and schema) in the AWS Glue Data Catalog.
AWS Glue has native connectors to data sources using JDBC drivers, either on AWS or elsewhere, as long as there is IP connectivity. In this post, we demonstrate how to connect to data sources that are not natively supported in AWS Glue today. We walk through connecting to and running ETL jobs against two such data sources, IBM DB2 and SAP Sybase. However, you can use the same process with any other JDBC-accessible database.
AWS Glue data sources
AWS Glue natively supports the following data stores by using the JDBC protocol:
- Publicly accessible databases, for example:
  - Amazon Aurora
  - Microsoft SQL Server
For more information, see Adding a Connection to Your Data Store in the AWS Glue Developer Guide.
One of the fastest growing architectures deployed on AWS is the data lake. The ETL processes that are used to ingest, clean, transform, and structure data are critically important for this architecture. Having the flexibility to interoperate with a broader range of database engines allows for a quicker adoption of the data lake architecture.
For data sources that AWS Glue doesn’t natively support, such as IBM DB2, Pivotal Greenplum, SAP Sybase, or any other relational database management system (RDBMS), you can import custom database connectors from Amazon S3 into AWS Glue jobs. In this case, the connection to the data source must be made from the AWS Glue script to extract the data, rather than using AWS Glue connections. To learn more, see Providing Your Own Custom Scripts in the AWS Glue Developer Guide.
Setting up an ETL job for an IBM DB2 data source
The first example demonstrates how to connect the AWS Glue ETL job to an IBM DB2 instance, transform the data from the source, and store it in Apache Parquet format in Amazon S3. To successfully create the ETL job using an external JDBC driver, you must define the following:
- The S3 location of the job script
- The S3 location of the temporary directory
- The S3 location of the JDBC driver
- The S3 location of the Parquet data (output)
- The IAM role for the job
By default, AWS Glue suggests bucket names for the scripts and the temporary directory. You can create similar locations for the JDBC driver and for the Parquet data (output).
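The exact names vary by account and Region; an illustrative layout (the account ID and user folder below are placeholders) might look like this:

```
s3://aws-glue-scripts-123456789012-us-east-2/<user>/      # job scripts (suggested by AWS Glue)
s3://aws-glue-temporary-123456789012-us-east-2/<user>/    # temporary directory (suggested by AWS Glue)
s3://aws-glue-jdbc-drivers-123456789012-us-east-2/        # JDBC driver jars (created by you)
s3://aws-glue-data-output-123456789012-us-east-2/<user>/  # Parquet output (created by you)
```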
Keep in mind that having the AWS Glue job and S3 buckets in the same AWS Region helps save on cross-Region data transfer fees. For this post, we will work in the US East (Ohio) Region (us-east-2).
Creating the IAM role
The next step is to set up the IAM role that the ETL job will use:
- Sign in to the AWS Management Console, and search for IAM.
- On the IAM console, choose Roles in the left navigation pane.
- Choose Create role. The role type of trusted entity must be an AWS service, specifically AWS Glue (a sketch of the resulting trust policy appears after this list).
- Choose Next: Permissions.
- Search for the AWSGlueServiceRole policy, and select it.
- Search again, now for the SecretsManagerReadWrite policy. This policy allows the AWS Glue job to access database credentials that are stored in AWS Secrets Manager.
CAUTION: This policy is open and is being used for testing purposes only. You should create a custom policy to narrow the access just to the secrets that you want to use in the ETL job.
- Select this policy, and choose Next: Review.
- Give your role a name, for example, GluePermissions, and confirm that both policies were selected.
- Choose Create role.
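For reference, selecting AWS Glue as the trusted service gives the role a trust policy equivalent to the following. The console generates this for you, so this is only a sketch of what to expect:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "glue.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```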
Now that you have created the IAM role, it’s time to upload the JDBC driver to the defined location in Amazon S3. For this example, we will use the DB2 driver, which is available on the IBM Support site.
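One way to do the upload is with boto3; the local file name and bucket below are illustrative, and any other upload method (console or AWS CLI) works just as well:

```python
import boto3

# Upload the downloaded DB2 driver jar to the JDBC driver location created earlier.
s3 = boto3.client("s3")
s3.upload_file(
    "db2jcc4.jar",                                    # local path to the driver jar
    "aws-glue-jdbc-drivers-123456789012-us-east-2",   # illustrative bucket name
    "db2jcc4.jar",                                    # object key in the bucket
)
```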
Storing database credentials
It is a best practice to store database credentials in a safe store. In this case, we use AWS Secrets Manager to securely store credentials. Follow these steps to create those credentials:
- Open the console, and search for Secrets Manager.
- In the AWS Secrets Manager console, choose Store a new secret.
- Under Select a secret type, choose Other type of secrets.
- In the Secret key/value, set one row for each of the following parameters:
- db_url (for example, jdbc:db2://10.10.12.12:50000/SAMPLE)
- driver_name (for example, com.ibm.db2.jcc.DB2Driver)
- output_bucket (for example, aws-glue-data-output-1234567890-us-east-2/User)
- Choose Next.
- For Secret name, use DB2_Database_Connection_Info.
- Choose Next.
- Keep the Disable automatic rotation check box selected.
- Choose Next.
- Choose Store.
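The ETL job will read these values back at run time. A minimal sketch of that lookup with boto3, assuming the secret name above (and assuming you also add user name and password keys to the same secret), looks like this:

```python
import json
import boto3

# Retrieve the connection details stored in AWS Secrets Manager.
client = boto3.client("secretsmanager")
response = client.get_secret_value(SecretId="DB2_Database_Connection_Info")
secret = json.loads(response["SecretString"])

db_url = secret["db_url"]
driver_name = secret["driver_name"]
output_bucket = secret["output_bucket"]
```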
Adding a job in AWS Glue
The next step is to author the AWS Glue job, following these steps:
- In the AWS Management Console, search for AWS Glue.
- In the navigation pane on the left, choose Jobs under the ETL section.
- Choose Add job.
- Fill in the basic Job properties:
- Give the job a name (for example, db2-job).
- Choose the IAM role that you created previously (GluePermissions).
- For This job runs, choose A new script to be authored by you.
- For ETL language, choose Python.
- In the Script libraries and job parameters section, choose the location of your JDBC driver for Dependent jars path.
- Choose Next.
- On the Connections page, choose Next.
- On the summary page, choose Save job and edit script. This creates the job and opens the script editor.
In the editor, replace the existing code with the following script. Important: line 47 of the script corresponds to the mapping of the fields in the source table to the destination, the dropping of null fields to save space in the Parquet output, and finally the write to Amazon S3 in Parquet format.
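The author's full script is not reproduced here; the sketch below only illustrates the overall flow. The table name, column mappings, and the user name and password keys in the secret are assumptions for the example, not values from the original post:

```python
import json
import boto3

from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.transforms import ApplyMapping, DropNullFields

# Read the connection details stored earlier in AWS Secrets Manager.
secrets = boto3.client("secretsmanager")
secret = json.loads(
    secrets.get_secret_value(SecretId="DB2_Database_Connection_Info")["SecretString"]
)

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session

# Connect to the source over JDBC using the driver supplied as a dependent jar.
# "DB2INST1.EMPLOYEE" is a placeholder table name; replace it with your own.
df = (
    spark.read.format("jdbc")
    .option("url", secret["db_url"])
    .option("driver", secret["driver_name"])
    .option("dbtable", "DB2INST1.EMPLOYEE")
    .option("user", secret["username"])       # assumed key in the secret
    .option("password", secret["password"])   # assumed key in the secret
    .load()
)

df.printSchema()
print("Number of rows read: {}".format(df.count()))

# Map source columns to target names (placeholder columns), drop null fields
# to save space in the Parquet output, and write to the output bucket.
dyf = DynamicFrame.fromDF(df, glue_context, "dyf")
mapped = ApplyMapping.apply(
    frame=dyf,
    mappings=[
        ("EMPNO", "string", "empno", "string"),
        ("FIRSTNME", "string", "first_name", "string"),
        ("LASTNAME", "string", "last_name", "string"),
    ],
)
cleaned = DropNullFields.apply(frame=mapped)

glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://{}".format(secret["output_bucket"])},
    format="parquet",
)
```

Reading the source with spark.read.format("jdbc") directly from the script, rather than through an AWS Glue connection, is what lets the externally supplied driver jar be used.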
- Choose Save.
- Choose the black X on the right side of the screen to close the editor.
Running the ETL job
Now that you have created the job, the next step is to execute it as follows:
- On the Jobs page, select your new job. On the Action menu, choose Run job, and confirm that you want to run the job. Wait a few moments while it finishes executing.
- After the job shows as Succeeded, choose Logs to read the output of the job.
- In the output of the job, you will find the result of df.printSchema() and the message with the row count from df.count().
Also, if you go to your output bucket in S3, you will find the Parquet result of the ETL job.
Using AWS Glue, you have created an ETL job that connects to an existing database using an external JDBC driver. It enables you to execute any transformation that you need.
Setting up an ETL job for an SAP Sybase data source
In this section, we describe how to create an AWS Glue ETL job against an SAP Sybase data source. The process mentioned in the previous section works for a Sybase data source with a few changes required in the job:
- While creating the job, choose the correct jar for the JDBC dependency.
- In the script, change the reference to the secret used from AWS Secrets Manager, and update the mapping of the fields (line 47 of the original script) to match the Sybase source, as follows:
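Relative to the DB2 sketch above, those two changes might look like the following; the secret name and the Sybase column names are assumptions for illustration:

```python
# Point the job at the Sybase secret instead of the DB2 one
# ("Sybase_Database_Connection_Info" is an assumed secret name).
secret = json.loads(
    secrets.get_secret_value(SecretId="Sybase_Database_Connection_Info")["SecretString"]
)

# Adjust the field mapping to the Sybase source columns (placeholder names and types).
mapped = ApplyMapping.apply(
    frame=dyf,
    mappings=[
        ("id", "int", "id", "int"),
        ("name", "string", "name", "string"),
        ("created_at", "timestamp", "created_at", "timestamp"),
    ],
)
```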
After you successfully execute the new ETL job, the output contains the same type of information that was generated with the DB2 data source.
Note that each of these JDBC drivers has its own nuances and different licensing terms that you should be aware of before using them.
Maximizing JDBC read parallelism
Something to keep in mind while working with big data sources is the memory consumption. In some cases, “Out of Memory” errors are generated when all the data is read into a single executor. One approach to optimize this is to rely on the parallelism on read that you can implement with Apache Spark and AWS Glue. To learn more, see the Apache Spark SQL module.
You can use the following options:
- partitionColumn: The name of an integer column that is used for partitioning.
- lowerBound: The minimum value of partitionColumn that is used to decide partition stride.
- upperBound: The maximum value of partitionColumn that is used to decide partition stride.
- numPartitions: The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms partition strides for the generated WHERE clause expressions used to split the partitionColumn. When unset, this defaults to SparkContext.defaultParallelism.
Those options specify the parallelism of the table read. lowerBound and upperBound decide the partition stride, but they don't filter the rows in the table, so Spark partitions and returns all rows in the table. For example:
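Continuing the DB2 sketch above, a partitioned read might look like this; the partition column and bound values are placeholders, and you should pick an integer key whose values are reasonably evenly distributed in your table:

```python
# Split the JDBC read into 10 parallel partitions on an integer column.
df = (
    spark.read.format("jdbc")
    .option("url", secret["db_url"])
    .option("driver", secret["driver_name"])
    .option("dbtable", "DB2INST1.EMPLOYEE")
    .option("user", secret["username"])
    .option("password", secret["password"])
    .option("partitionColumn", "employee_id")  # placeholder integer column
    .option("lowerBound", "1")
    .option("upperBound", "1000000")
    .option("numPartitions", "10")
    .load()
)
```

Each partition issues its own WHERE clause over the chosen column, so the work is spread across executors instead of loading the whole table into a single one.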
It’s important to be careful with the number of partitions because too many partitions could also result in Spark crashing your external database systems.
Using the process described in this post, you can connect to and run AWS Glue ETL jobs against any data source that can be reached using a JDBC driver. This includes new generations of common analytical databases like Greenplum and others.
You can improve the query efficiency of these datasets by using partitioning and pushdown predicates. For more information, see Managing Partitions for ETL Output in AWS Glue. This technique opens the door to moving data and feeding data lakes in hybrid environments.
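For example, once the Parquet output has been cataloged (for instance by an AWS Glue crawler) and partitioned, a pushdown predicate lets a job read only the partitions it needs. The database, table, and partition keys below are hypothetical:

```python
# Read only the partitions matching the predicate instead of scanning the whole table.
partitioned_dyf = glue_context.create_dynamic_frame.from_catalog(
    database="parquet_db",        # assumed Data Catalog database
    table_name="db2_output",      # assumed cataloged table over the Parquet output
    push_down_predicate="year == '2018' and month == '06'",
)
```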
If you found this post useful, be sure to check out Work with partitioned data in AWS Glue.
About the Authors
Kapil Shardha is a Technical Account Manager and supports enterprise customers with their AWS adoption. He has a background in infrastructure automation and DevOps.
William Torrealba is an AWS Solutions Architect supporting customers with their AWS adoption. He has a background in application development, highly available distributed systems, automation, and DevOps.
Security updates have been issued by Debian (libdatetime-timezone-perl, libmad, lucene-solr, tzdata, and wordpress), Fedora (drupal7, scummvm, scummvm-tools, and zsh), Mageia (boost, ghostscript, gsoap, java-1.8.0-openjdk, links, and php), openSUSE (pam_kwallet), and Slackware (python).
Red Hat has announced
the general availability of Red Hat Enterprise Linux 7.5. This version
features enhanced hybrid cloud security and compliance, improved storage
performance and efficiency, simplified management, and production-ready
Linux containers. RHEL 7.5 is available for x86, IBM Power, IBM z Systems, and 64-bit Arm. This release also brings single-host KVM virtualization and an Open Container Initiative (OCI)-formatted runtime environment and base image to IBM z Systems.
Security updates have been issued by Arch Linux (bchunk, thunderbird, and xerces-c), Debian (freeplane, icu, libvirt, and net-snmp), Fedora (monitorix, php-simplesamlphp-saml2, php-simplesamlphp-saml2_1, php-simplesamlphp-saml2_3, puppet, and qt5-qtwebengine), openSUSE (curl, libmodplug, libvorbis, mailman, nginx, opera, python-paramiko, and samba, talloc, tevent), Red Hat (python-paramiko, rh-maven35-slf4j, rh-mysql56-mysql, rh-mysql57-mysql, rh-ruby22-ruby, rh-ruby23-ruby, and rh-ruby24-ruby), Slackware (thunderbird), SUSE (clamav, kernel, memcached, and php53), and Ubuntu (samba and tiff).
Security updates have been issued by Arch Linux (clamav, curl, lib32-curl, lib32-libcurl-compat, lib32-libcurl-gnutls, libcurl-compat, and libcurl-gnutls), openSUSE (various KMPs), Oracle (firefox), Scientific Linux (firefox), SUSE (java-1_7_1-ibm), and Ubuntu (memcached).
Security updates have been issued by CentOS (firefox), Debian (clamav and firefox-esr), openSUSE (Chromium and kernel-firmware), Oracle (firefox), Red Hat (ceph), Scientific Linux (firefox), Slackware (curl), and SUSE (java-1_7_1-ibm and mariadb).
Security updates have been issued by Arch Linux (samba), CentOS (389-ds-base, kernel, libreoffice, mailman, and qemu-kvm), Debian (curl, libvirt, and mbedtls), Fedora (advancecomp, ceph, firefox, libldb, postgresql, python-django, and samba), Mageia (clamav, memcached, php, python-django, and zsh), openSUSE (adminer, firefox, java-1_7_0-openjdk, java-1_8_0-openjdk, and postgresql94), Oracle (kernel and libreoffice), Red Hat (erlang, firefox, flash-plugin, and java-1.7.1-ibm), Scientific Linux (389-ds-base, kernel, libreoffice, and qemu-kvm), SUSE (xen), and Ubuntu (curl, firefox, linux, linux-raspi2, and linux-hwe).
Security updates have been issued by Debian (samba), Fedora (tor), openSUSE (glibc, mysql-connector-java, and shadow), Oracle (dhcp), Red Hat (bind, chromium-browser, and dhcp), Scientific Linux (dhcp), and SUSE (java-1_7_0-openjdk, java-1_8_0-ibm, and java-1_8_0-openjdk).
Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/what-john-oliver-gets-wrong-about.html
How Bitcoin works
How does any money have value?
Why Bitcoin conference didn’t take Bitcoin
Turning a McNugget back into a chicken
How do you make money from Bitcoin?
Security updates have been issued by Arch Linux (python-django and python2-django), Debian (leptonlib), Fedora (bugzilla, cryptopp, electrum, firefox, freexl, glibc, jhead, libcdio, libsamplerate, libXcursor, libXfont, libXfont2, mingw-wavpack, nx-libs, php, python-crypto, quagga, sharutils, unzip, x2goserver, and xen), Gentoo (exim), openSUSE (cups, go1.8, ImageMagick, jgraphx, leptonica, openexr, tor, and wavpack), Red Hat (389-ds-base, java-1.7.1-ibm, kernel, kernel-rt, libreoffice, and php), SUSE (java-1_7_1-ibm), and Ubuntu (python-django).
Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/voice-controlled-magnification-glasses/
Go hands-free in the laboratory or makerspace with Mauro Pichiliani’s voice-controlled magnification glasses.
This video presents the project MoveLens: voice-controlled glasses with a magnifying lens. It was my entry for the Voice Activated contest on Instructables. Check the step-by-step guide at Voice Controlled Glasses With Magnifying Lens. Source code: https://github.com/pichiliani/MoveLens Step by Step guide: https://www.instructables.com/id/Voice-Controlled-Glasses-With-Magnifying-Lens/
It’s a kind of magnification
We’ve all been there – that moment when you need another pair of hands to complete a task. And while these glasses may not hold all the answers, they’re a perfect addition to any hobbyist’s arsenal.
Introducing Mauro Pichiliani’s voice-activated glasses: a pair of frames with magnification lenses that can flip up and down in response to a voice command, depending on the task at hand. No more needing to put down your tools in order to put magnifying glasses on. No more trying to re-position a magnifying glass with the back of your left wrist, or getting grease all over your lenses.
As Mauro explains in his tutorial for the glasses:
Many professionals work for many hours looking at very small areas, such as surgeons, watchmakers, jewellery designers and so on. Most of the time these professionals use some kind of magnification glasses that helps them to see better the area they are working with and other tiny items used on the job. The devices that had magnifications lens on a form factor of a glass usually allow the professional to move the lens out of their eye sight, i.e. put aside the lens. However, in some scenarios touching the lens or the glass rim to move away the lens can contaminate the fingers. Also, it is cumbersome and can break the concentration of the professional.
Voice-controlled magnification glasses
Using a Raspberry Pi Zero W, a servo motor, a microphone, and the IBM Watson speech-to-text service, Mauro built a pair of glasses that lets users control the position of the magnification lenses with voice commands.
Mauro started by dismantling a pair of standard magnification glasses in order to modify the lens supports to allow them to move freely. He drilled a hole in one of the lens supports to provide a place to attach the servo, and used lollipop sticks and hot glue to fix the lenses relative to one another, so they would both move together under the control of the servo. Then, he set up a Raspberry Pi Zero, installing Raspbian and software to use a USB microphone; after connecting the servo to the Pi Zero’s GPIO pins, he set up the Watson speech-to-text service.
Finally, he wrote the code to bring the project together. Two Python scripts direct the servo to raise and lower the lenses, and a Node.js script captures audio from the microphone, passes it on to Watson, checks for an “up” or “down” command, and calls the appropriate Python script as required.
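Mauro's actual code is on GitHub (linked above). As a rough sketch of what one of the Python servo scripts might look like, assuming the servo signal wire is on GPIO 18 and using the RPi.GPIO library (the pin, duty cycle, and timing values are illustrative and need tuning for a specific servo):

```python
# lens_up.py - rotate the servo to flip the magnification lenses up.
import time
import RPi.GPIO as GPIO

SERVO_PIN = 18  # assumed GPIO pin wired to the servo signal line

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)

pwm = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz servo control signal
pwm.start(0)

try:
    pwm.ChangeDutyCycle(7.5)   # roughly the "lenses up" position; adjust as needed
    time.sleep(1)              # give the servo time to reach the position
finally:
    pwm.stop()
    GPIO.cleanup()
```

A matching lens_down.py would simply drive the servo to the opposite position, and the Node.js listener calls one or the other when Watson returns an "up" or "down" transcript.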
You can follow the tutorial on the Instructables website, where Mauro entered the glasses into the Instructables Voice Activated Challenge. And if you’d like to take your first steps into digital making using the Raspberry Pi, take a look at our free online projects.
Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/ssd-vs-hdd-future-of-storage/
Customers frequently ask us whether and when we plan to move our cloud backup and data storage to SSDs (Solid-State Drives). That’s not a surprising question considering the many advantages SSDs have over magnetic platter type drives, also known as HDDs (Hard-Disk Drives).
We’re a large user of HDDs in our data centers (currently 100,000 hard drives holding over 500 petabytes of data). We want to provide the best performance, reliability, and economy for our cloud backup and cloud storage services, so we continually evaluate which drives to use for operations and in our data centers. While we use SSDs for some applications, which we’ll describe below, there are reasons why HDDs will continue to be the primary drives of choice for us and other cloud providers for the foreseeable future.
HDDs vs SSDs
HDD vs SSD
The laptop computer I am writing this on has a single 512GB SSD, which has become a common feature in higher end laptops. The SSD’s advantages for a laptop are easy to understand: they are smaller than an HDD, faster, quieter, last longer, and are not susceptible to vibration and magnetic fields. They also have much lower latency and access times.
Today’s typical online price for a 2.5” 512GB SSD is $140 to $170. The typical online price for a 3.5” 512 GB HDD is $44 to $65. That’s a pretty significant difference in price, but since the SSD helps make the laptop lighter, enables it to be more resistant to the inevitable shocks and jolts it will experience in daily use, and adds the benefits of faster booting, faster waking from sleep, and faster launching of applications and handling of big files, the extra cost for the SSD in this case is worth it.
Some of these SSD advantages, chiefly speed, also will apply to a desktop computer, so desktops are increasingly outfitted with SSDs, particularly to hold the operating system, applications, and data that is accessed frequently. Replacing a boot drive with an SSD has become a popular upgrade option to breathe new life into a computer, especially one that seems to take forever to boot or is used for notoriously slow-loading applications such as Photoshop.
We covered upgrading your computer with an SSD in our blog post SSD 101: How to Upgrade Your Computer With An SSD.
Data centers are an entirely different kettle of fish. The primary concerns for data center storage are reliability, storage density, and cost. While SSDs are strong in the first two areas, it’s the third where they are not yet competitive. At Backblaze we adopt higher density HDDs as they become available — we’re currently using both 10TB and 12TB drives (among other capacities) in our data centers. Higher density drives provide greater storage density per Storage Pod and Vault and reduce our overhead cost through less required maintenance and lower total power requirements. Comparable SSDs in those sizes would cost roughly $1,000 per terabyte, considerably higher than the corresponding HDD. Simply put, SSDs are not yet in the price range to make their use economical for the benefits they provide, which is the reason why we expect to be using HDDs as our primary storage media for the foreseeable future.
What Are HDDs?
HDDs have been around for over 60 years, since IBM introduced them in 1956. The first disk drive was the size of a car, stored a mere 3.75 megabytes, and cost $300,000 in today’s dollars.
IBM 350 Disk Storage System — 3.75MB in 1956
The 350 Disk Storage System was a major component of the IBM 305 RAMAC (Random Access Method of Accounting and Control) system, which was introduced in September 1956. It consisted of 40 platters and a dual read/write head on a single arm that moved up and down the stack of magnetic disk platters.
The basic mechanism of an HDD remains unchanged since then, though it has undergone continual refinement. An HDD uses magnetism to store data on a rotating platter. A read/write head is affixed to an arm that floats above the spinning platter reading and writing data. The faster the platter spins, the faster an HDD can perform. Typical laptop drives today spin at either 5400 RPM (revolutions per minute) or 7200 RPM, though some server-based platters spin at even higher speeds.
Exploded drawing of a hard drive
The platters inside the drives are coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code. If all this sounds like an HDD is vulnerable to shocks and vibration, you’d be right. They also are vulnerable to magnets, which is one way to destroy the data on an HDD if you’re getting rid of it.
The major advantage of an HDD is that it can store lots of data cheaply. One and two terabyte (1,024 and 2,048 gigabytes) hard drives are not unusual for a laptop these days, and 10TB and 12TB drives are now available for desktops and servers. Densities and rotation speeds continue to grow. However, if you compare the cost of common HDDs vs SSDs for sale online, the SSDs are roughly 3-5x the cost per gigabyte. So if you want cheap storage and lots of it, using a standard hard drive is definitely the more economical way to go.
What are the best uses for HDDs?
- Disk arrays (NAS, RAID, etc.) where high capacity is needed
- Desktops when low cost is priority
- Media storage (photos, videos, audio not currently being worked on)
- Drives with an extreme number of reads and writes
What Are SSDs?
SSDs go back almost as far as HDDs, with the first semiconductor storage device compatible with a hard drive interface introduced in 1978, the StorageTek 4305.
Storage Technology 4305 SSD
The StorageTek was an SSD aimed at the IBM mainframe compatible market. The STC 4305 was seven times faster than IBM’s popular 2305 HDD system (and also about half the price). It consisted of a cabinet full of charge-coupled devices and cost $400,000 for 45MB capacity with throughput speeds up to 1.5 MB/sec.
SSDs are based on a type of non-volatile memory called NAND (named for the Boolean operator “NOT AND,” and one of two main types of flash memory). Flash memory stores data in individual memory cells, which are made of floating-gate transistors. Though they are semiconductor-based memory, they retain their information when no power is applied to them — a feature that’s obviously a necessity for permanent data storage.
Samsung SSD 850 Pro
Compared to an HDD, SSDs have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. For most users, it’s the speed of an SSD that primarily attracts them. When discussing the speed of drives, what we are referring to is the speed at which they can read and write data.
For HDDs, the speed at which the platters spin strongly determines the read/write times. When data on an HDD is accessed, the read/write head must physically move to the location where the data was encoded on a magnetic section on the platter. If the file being read was written sequentially to the disk, it will be read quickly. As more data is written to the disk, however, it’s likely that the file will be written across multiple sections, resulting in fragmentation of the data. Fragmented data takes longer to read with an HDD as the read head has to move to different areas of the platter(s) to completely read all the data requested.
Because SSDs have no moving parts, they can operate at speeds far above those of a typical HDD. Fragmentation is not an issue for SSDs. Files can be written anywhere with little impact on read/write times, resulting in read times far faster than any HDD, regardless of fragmentation.
Samsung SSD 850 Pro (back)
Due to the way data is written and read to the drive, however, SSD cells can wear out over time. SSD cells push electrons through a gate to set its state. This process wears on the cell and over time reduces its performance until the SSD wears out. This effect takes a long time and SSDs have mechanisms to minimize this effect, such as the TRIM command. Flash memory writes an entire block of storage no matter how few pages within the block are updated. This requires reading and caching the existing data, erasing the block and rewriting the block. If an empty block is available, a write operation is much faster. The TRIM command, which must be supported in both the OS and the SSD, enables the OS to inform the drive which blocks are no longer needed. It allows the drive to erase the blocks ahead of time in order to make empty blocks available for subsequent writes.
The effect of repeated reading and erasing on an SSD is cumulative and an SSD can slow down and even display errors with age. It’s more likely, however, that the system using the SSD will be discarded for obsolescence before the SSD begins to display read/write errors. Hard drives eventually wear out from constant use as well, since they use physical recording methods, so most users won’t base their selection of an HDD or SSD drive based on expected longevity.
SSD circuit board
Overall, SSDs are considered far more durable than HDDs due to a lack of mechanical parts. The moving mechanisms within an HDD are susceptible to not only wear and tear over time, but to damage due to movement or forceful contact. If one were to drop a laptop with an HDD, there is a high likelihood that all those moving parts will collide, resulting in potential data loss and even destructive physical damage that could kill the HDD outright. SSDs have no moving parts so, while they hold the risk of a potentially shorter life span due to high use, they can survive the rigors we impose upon our portable devices and laptops.
What are the best uses for SSDs?
- Notebooks and laptops, where performance, light weight, areal storage density, and resistance to shock and general ruggedness are desirable
- Boot drives holding operating system and applications, which will speed up booting and application launching
- Working files (media that is being edited: photos, video, audio, etc.)
- Swap drives where SSD will speed up disk paging
- Cache drives
- Database servers
- Revitalizing an older computer. If you’ve got a computer that seems slow to start up and slow to load applications and files, updating the boot drive with an SSD could make it seem, if not new, at least as if it just came back refreshed from spending some time on the beach.
Stay Tuned for Part 2 of HDD vs SSD
That’s it for part 1. In our second part we’ll take a deeper look at the differences between HDDs and SSDs, how both HDD and SSD technologies are evolving, and how Backblaze takes advantage of SSDs in our operations and data centers.
Here’s a tip on finding all the posts tagged with SSD on our blog. Just follow https://www.backblaze.com/blog/tag/ssd/.
The post HDD vs SSD: What Does the Future for Storage Hold? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.
As we sail past 500 Petabytes of data stored, our Operations Department continues to grow. To that end we’ve added a brand new member to our Operations and Engineering teams, Alex! He straddles the line between Ops and Engineering, working on both internal and external systems – making sure they run smoothly. Let’s learn a bit more about Alex, shall we?
What is your Backblaze Title?
Where are you originally from?
What attracted you to Backblaze?
The company mission and overall transparency really appealed to me. It was a great opportunity to work in an environment that aligned with my core values.
What do you expect to learn while being at Backblaze?
I expect to learn more about modern cloud technologies and data center deployments.
Where else have you worked?
I worked at a startup called Cleversafe out of college which was later acquired by IBM.
Where did you go to school?
University of Illinois at Urbana-Champaign
What’s your dream job?
NHL general manager.
Favorite place you’ve traveled?
The Cinque Terre. It’s a set of small towns along the Italian Riviera that has great hiking.
Of what achievement are you most proud?
Graduating college with two engineering degrees.
Star Trek or Star Wars?
Why do you like certain things?
Life’s too short not to like certain things.
Cinque Terre is definitely one of the most beautiful places on earth, as long as you don’t visit on a foggy day! If you happen to find the perfect NHL job, we’ll understand! Oh, and thanks for bringing Cosmo to the office on occasion! Welcome aboard!