Tag Archives: conference

[$] Notes from the LPC tracing microconference

Post Syndicated from corbet original https://lwn.net/Articles/734453/rss

The “tracing and BPF” microconference was held on the final day of the 2017
Linux Plumbers Conference; it covered a number of topics relevant to heavy
users of kernel and user-space tracing. Read on for a summary of a number
of those discussions on topics like BPF introspection, stack traces,
kprobes, uprobes, and the Common Trace Format.

[$] Linking commits to reviews

Post Syndicated from jake original https://lwn.net/Articles/734018/rss

In a talk in the refereed track of the 2017 Linux Plumbers Conference,
Alexandre Courouble presented the email2git tool that
links kernel commits to their review discussion on the mailing lists. Email2git
is a plugin for cregit, which implements token-level history for a Git repository; we covered a talk on cregit just over one year
ago. Email2git combines cregit with Patchwork to link
the commit to a patch and its discussion threads from any of the mailing
lists that are scanned by patchwork.kernel.org. The result
is a way to easily find the discussion that led to a piece of code—or even
just a token—changing in the kernel source tree.

[$] Building the kernel with clang

Post Syndicated from jake original https://lwn.net/Articles/734071/rss

Over the years, there has been a persistent effort to build the Linux
kernel using the Clang C compiler that is part of the LLVM project. We
last looked in on the effort in a report from
the LLVM microconference
at the 2015 Linux Plumbers Conference (LPC), but we
have followed it before that as
well. At this year’s LPC, two Google kernel engineers, Greg Hackmann and
Nick Desaulniers, came to the Android
microconference
to update the status; at this point, it is possible to
build two long-term support kernels (4.4 and 4.9) with Clang.

[$] Testing kernels

Post Syndicated from jake original https://lwn.net/Articles/734016/rss

New kernels are released regularly, but it is not entirely
clear how much in-depth testing they are actually getting. Even the
mainline kernel may not be getting enough of the right kind of testing. That was the
topic for a “birds of a feather” (BoF) meeting at this year’s Linux Plumbers
Conference
(LPC) held in mid-September in Los Angeles, CA.
Dhaval Giani and Sasha Levin organized the BoF as a prelude to the Testing
and Fuzzing microconference
they were leading the next day.

[$] Notes from the LPC scheduler microconference

Post Syndicated from corbet original https://lwn.net/Articles/734039/rss

The scheduler
workloads microconference
at the 2017 Linux Plumbers Conference covered
several aspects of the kernel’s CPU scheduler. While workloads were on the
agenda, so were a rework of the realtime scheduler’s push/pull mechanism, a
distinctly different approach to multi-core scheduling, and the use of
tracing for workload simulation and analysis. As the following summary
shows, CPU scheduling has not yet reached a point where all of the
important questions have been answered.

Backblaze’s Upgrade Guide for macOS High Sierra

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/macos-high-sierra-upgrade-guide/

High Sierra

Apple introduced macOS 10.13 “High Sierra” at its 2017 Worldwide Developers Conference in June. On Tuesday, we learned we don’t have long to wait — the new OS will be available on September 25. It’s a free upgrade, and millions of Mac users around the world will rush to install it.

We understand. A new OS from Apple is exciting. But please, before you upgrade, we want to remind you to back up your Mac. You want your data to be safe from unexpected problems that could happen during the upgrade. We do, too. To make that easier, Backblaze offers this macOS High Sierra upgrade guide.

Why Upgrade to macOS 10.13 High Sierra?

High Sierra, as the name suggests, is a follow-on to the previous macOS, Sierra. Its major focus is the base OS itself, with significant improvements to the file system, video, graphics, and virtual/augmented reality that lay the groundwork for new capabilities in the future.

But don’t despair; there also are outward improvements that will be readily apparent to everyone when they boot the OS for the first time. We’ll cover both the inner and outer improvements coming in this new OS.

Under the Hood of High Sierra

APFS (Apple File System)

Apple has been rolling out its first file system upgrade for a while now. It’s already in iOS: now High Sierra brings APFS to the Mac. Apple touts APFS as a new file system optimized for Flash/SSD storage and featuring strong encryption, better and faster file handling, safer copying and moving of files, and other improved file system fundamentals.

We went into detail about the enhancements and improvements that APFS has over the previous file system, HFS+, in an earlier post. Many of these improvements, including enhanced performance, security and reliability of data, will provide immediate benefits to users, while others provide a foundation for future storage innovations and will require work by Apple and third parties to support in their products and services.

Most of us won’t notice these improvements, but we’ll benefit from better, faster, and safer file handling, which I think all of us can appreciate.

Video

High Sierra includes High Efficiency Video Encoding (HEVC, aka H.265), which preserves better detail and color while also introducing improved compression over H.264 (MPEG-4 AVC). Even existing Macs will benefit from the HEVC software encoding in High Sierra, but newer Mac models include HEVC hardware acceleration for even better performance.

MacBook Pro

Metal 2

macOS High Sierra introduces Metal 2, the next generation of Apple’s Metal graphics API that was launched three years ago. Apple claims that Metal 2 provides up to 10x better performance in key areas. It provides near-direct access to the graphics processor (GPU), enabling the GPU to take control over key aspects of the rendering pipeline. Metal 2 will enhance the Mac’s capability for machine learning, and is the technology driving the new virtual reality platform on Macs.

audio video editor screenshot

Virtual Reality

We’re about to see an explosion of virtual reality experiences on both the Mac and iOS thanks to High Sierra and iOS 11. Content creators will be able to use apps like Final Cut Pro X, Epic Unreal 4 Editor, and Unity Editor to create fully immersive worlds that will revolutionize entertainment and education and have many professional uses, as well.

To enjoy them, users will want the new iMac with Retina 5K display, the upcoming iMac Pro, or any supported Mac paired with the latest external GPU and a VR headset.

iMac and HTC virtual reality player

Outward Improvements

Siri

Siri logo

Expect a more natural voice from Siri in High Sierra. She or he will be less robotic, with greater expression and use of intonation in speech. Siri will also learn more about your preferences in things like music, helping you choose music that fits your taste and putting together playlists expressly for you. Expect Siri to be able to answer your questions about music-related trivia, as well.

Siri:  what does “scaramouche” refer to in the song Bohemian Rhapsody?

Photos

HD MacBook Pro screenshot

Photos has been redesigned with a new layout and new tools. A redesigned Edit view includes new tools for fine-tuning color and contrast and making adjustments within a defined color range. Some fun elements for creating special effects and memories also have been added. Photos now works with external apps such as Photoshop and Pixelmator. Compatibility with third-party extensions adds printing and publishing services to help get your photos out into the world.

Safari

Safari logo

Apple claims that Safari in High Sierra is the world’s fastest desktop browser, outperforming Chrome and other browsers in a range of benchmark tests. They’ve also added autoplay blocking for those pesky videos that play without your permission and tracking blocking to help protect your privacy.

Can My Mac Run macOS High Sierra 10.13?

All Macs introduced in mid-2010 or later are compatible. MacBook and iMac computers introduced in late 2009 are also compatible. You’ll need OS X 10.7.5 “Lion” or later installed, along with at least 2 GB of RAM and 8.8 GB of available storage to manage the upgrade.
Some features of High Sierra require an internet connection or an Apple ID. You can check to see if your Mac is compatible with High Sierra on Apple’s website.

Conquering High Sierra — What Do I Do Before I Upgrade?

Back Up That Mac!

It’s always smart to back up before you upgrade the operating system or make any other crucial changes to your computer. Upgrading your OS is a major change to your computer, and if anything goes wrong…well, you don’t want that to happen.

iMac backup screenshot

We recommend the 3-2-1 Backup Strategy to make sure your data is safe. What does that mean? Have three copies of your data. There’s the “live” version on your Mac, a local backup (Time Machine, another copy on a local drive or other computer), and an offsite backup like Backblaze. No matter what happens to your computer, you’ll have a way to restore the files if anything goes wrong. Need help understanding how to back up your Mac? We have you covered with a handy Mac backup guide.

Check for App and Driver Updates

This is when it helps to do your homework. Check with app developers or device manufacturers to find out whether their apps and devices have updates to work with High Sierra. Visit their websites or use the Check for Updates feature built into most apps (often found in the File or Help menus).

If you’ve downloaded apps through the Mac App Store, make sure to open them and click on the Updates button to download the latest updates.

Updating can be hit or miss when you’ve installed apps that didn’t come from the Mac App Store. To make it easier, visit the MacUpdate website. MacUpdate tracks changes to thousands of Mac apps.


Will Backblaze work with macOS High Sierra?

Yes. We’ve taken care to ensure that Backblaze works with High Sierra. We’ve already enhanced our Macintosh client to report the space available on an APFS container, and we plan to add further support for APFS capabilities in the future.

Of course, we’ll watch Apple’s release carefully for any last minute surprises. We’ll officially offer support for High Sierra once we’ve had a chance to thoroughly test the release version.


Set Aside Time for the Upgrade

Depending on the speed of your Internet connection and your computer, upgrading to High Sierra will take some time. You’ll be able to use your Mac straightaway after answering a few questions at the end of the upgrade process.

If you’re going to install High Sierra on multiple Macs, a time-and-bandwidth-saving tip came from a Backblaze customer who suggested copying the installer from your Mac’s Applications folder to a USB Flash drive (or an external drive) before you run it. The installer routinely deletes itself once the upgrade process is completed, but if you grab it before that happens you can use it on other computers.
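
If you prefer to script that tip, a minimal sketch is below. It assumes the installer’s stock name and location; the destination path is a placeholder for your own flash or external drive:

```python
import shutil
from pathlib import Path

installer = Path("/Applications/Install macOS High Sierra.app")
destination = Path("/Volumes/MyUSB")  # placeholder: your mounted flash or external drive

# Copy the whole .app bundle before running the upgrade, since the installer
# deletes itself once the process completes.
shutil.copytree(installer, destination / installer.name)
```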

Where Do I get High Sierra?

Apple says that High Sierra will be available on September 25. Like other Mac operating system releases, Apple offers macOS 10.13 High Sierra for download from the Mac App Store, which is included on the Mac. As long as your Mac is supported and running OS X 10.7.5 “Lion” (released in 2012) or later, you can download and run the installer. It’s free. Thank you, Apple.

Better to be Safe than Sorry

Back up your Mac before doing anything to it, and make Backblaze part of your 3-2-1 backup strategy. That way your data is secure. Even if you have to roll back after an upgrade, or if you run into other problems, your data will be safe and sound in your backup.

Tell us How it Went

Are you getting ready to install High Sierra? Still have questions? Let us know in the comments. Tell us how your update went and what you like about the new release of macOS.

And While You’re Waiting for High Sierra…

While you’re waiting for Apple to release High Sierra on September 25, you might want to check out these other posts about using your Mac and Backblaze.

The post Backblaze’s Upgrade Guide for macOS High Sierra appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Security Flaw in Estonian National ID Card

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/security_flaw_i.html

We have no idea how bad this really is:

On 30 August, an international team of researchers informed the Estonian Information System Authority (RIA) of a vulnerability potentially affecting the digital use of Estonian ID cards. The possible vulnerability affects a total of almost 750,000 ID-cards issued starting from October 2014, including cards issued to e-residents. The ID-cards issued before 16 October 2014 use a different chip and are not affected. Mobile-IDs are also not impacted.

My guess is that it’s worse than the politicians are saying:

According to Peterkop, the current data shows this risk to be theoretical and there is no evidence of anyone’s digital identity being misused. “All ID-card operations are still valid and we will take appropriate actions to secure the functioning of our national digital-ID infrastructure. For example, we have restricted the access to Estonian ID-card public key database to prevent illegal use.”

And because this system is so important in local politics, the effects are significant:

In the light of current events, some Estonian politicians called to postpone the upcoming local elections, due to take place on 16 October. In Estonia, approximately 35% of the voters use digital identity to vote online.

But the Estonian prime minister, Jüri Ratas, said at a press conference on 5 September that “this incident will not affect the course of the Estonian e-state.” Ratas also recommended to use Mobile-IDs where possible. The prime minister said that the State Electoral Office will decide whether it will allow the usage of ID cards at the upcoming local elections.

The Estonian Police and Border Guard estimates it will take approximately two months to fix the issue with faulty cards. The authority will involve as many Estonian experts as possible in the process.

This is exactly the sort of thing I worry about as ID systems become more prevalent and more centralized. Anyone want to place bets on whether a foreign country is going to try to hack the next Estonian election?

Another article.

State of MAC address randomization

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/09/state-of-mac-address-randomization.html

tldr: I went to DragonCon, a conference of 85,000 people, to sniff WiFi packets and test how many phones now use MAC address randomization. Almost all iPhones nowadays do, but it seems only a third of Android phones do.

Ten years ago at BlackHat, we presented the “data seepage” problem, how the broadcasts from your devices allow you to be tracked. Among the things we highlighted was how WiFi probes looking to connect to access-points expose the unique hardware address burned into the phone, the MAC address. This hardware address is unique to your phone, shared by no other device in the world. Evildoers, such as the NSA or GRU, could install passive listening devices in airports and train-stations around the world in order to track your movements. This could be done with $25 devices sprinkled around a few thousand places — within the budget of not only a police state, but also the average hacker.

In 2014, with the release of iOS 8, Apple addressed this problem by randomizing the MAC address. Every time you restart your phone, it picks a new, random hardware address for connecting to WiFi. This causes a few problems: every time you restart your iOS devices, your home network sees a completely new device, which can fill up your router’s connection table. Since that table usually has at least 100 entries, this shouldn’t be a problem for your home, but corporations and other owners of big networks saw their connection tables suddenly get big with iOS 8.

In 2015, Google added the feature to Android as well. However, even though most Android phones today support this feature in theory, it’s usually not enabled.

Recently, I went to DragonCon in order to test out how well this works. DragonCon is a huge sci-fi/fantasy conference in Atlanta in August, second only to San Diego’s Comic-Con in popularity. It’s spread across several neighboring hotels in the downtown area. A lot of the traffic funnels through the Marriott Marquis hotel, which has a large open area where, from above, you can see thousands of people at a time.

And, with a laptop, see their broadcast packets.

So I went up on a higher floor and set up my laptop to capture “probe” broadcasts coming from phones, in order to record the hardware MAC addresses. I’ve done this in years past, before address randomization, in order to record the popularity of iPhones. The first three bytes of an old-style, non-randomized address identify the manufacturer. This time, I should see a lot fewer manufacturer IDs, and mostly just random addresses instead.

I recorded 9,095 unique probes over a couple of hours. I’m not sure exactly how long — my laptop would go to sleep occasionally because of a lack of activity on the keyboard. I should probably set up a Raspberry Pi somewhere next year to get a more consistent result.

A quick summary of the results:

The 9,000 devices were split almost evenly between Apple and Android. Almost all of the Apple devices randomized their addresses. About a third of the Android devices randomized. (This assumes Android only randomizes the final 3 bytes of the address, and that Apple randomizes all 6 bytes — my assumption may be wrong).
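
To make the two buckets concrete, here is a minimal sketch of the classification logic, assuming the locally administered bit (0x02 in the first octet) marks a fully randomized address and that partially randomizing Androids keep a recognizable prefix. The da:a1:19 prefix is the one commonly reported for Android probe randomization, and the vendor entries are illustrative samples, not data from this capture:

```python
from collections import Counter

# Illustrative entries only; a real run would use the full IEEE OUI registry.
SAMPLE_OUIS = {"f0:d1:a9": "Apple", "00:9a:cd": "Samsung"}  # hypothetical sample table
ANDROID_RANDOM_PREFIX = "da:a1:19"  # prefix commonly reported for Android randomized probes

def classify(mac: str) -> str:
    mac = mac.lower()
    if mac.startswith(ANDROID_RANDOM_PREFIX):
        return "(Android)"                  # final 3 bytes randomized, prefix recognizable
    if int(mac[:2], 16) & 0x02:             # locally administered bit set
        return "(random)"                   # fully randomized 6-byte address
    return SAMPLE_OUIS.get(mac[:8], "unknown vendor")  # burned-in: first 3 bytes name the maker

probes = ["da:a1:19:12:34:56", "f2:33:44:aa:bb:cc", "f0:d1:a9:00:11:22"]
print(Counter(classify(m) for m in probes))  # Counter({'(Android)': 1, '(random)': 1, 'Apple': 1})
```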

A table of the major results is below. A little explanation:

  • The first item in the table is the number of phones that randomized the full 6 bytes of the MAC address. I’m guessing these are either mostly or all Apple iOS devices. They are nearly half of the total, or 4,498 out of 9,095 unique probes.
  • The second number is those that randomized the final 3 bytes of the MAC address, but left the first three bytes identifying themselves as Android devices. I’m guessing this represents all the Android devices that randomize. My guesses may be wrong: maybe some Androids randomize the full 6 bytes, which would get them counted in the first number.
  • The following numbers are phones from major Android manufacturers like Motorola, LG, HTC, Huawei, OnePlus, and ZTE. Remember: the first 3 bytes of an un-randomized address identify who made it. There are roughly 2,500 of these devices.
  • There is a count for 309 Apple devices. These are either older devices running versions before iOS 8, devices with the feature turned off (some corporations demand this), or MacBooks rather than phones.
  • The vendor of the access-points that Marriott uses is “Ruckus”. They have a lot of access-points in the hotel.
  • The “TCT mobile” entry is actually BlackBerry. Apparently, BlackBerry stopped making phones and instead just licenses the software/brand to other hardware makers. If you buy a BlackBerry from the phone store, it’s likely going to be a TCT phone instead.
  • I’m assuming the “Amazon” devices are Kindle e-readers.
  • Lastly, I’d like to point out the two records for “Ford”. I was capturing while walking out of the building; I think I caught a few cars driving by.

(random)  4498
(Android)  1562
Samsung  646
Motorola  579
Murata  505
LG  412
Apple  309
HTC-phone  226
Huawei  66
Ruckus  60
OnePlus Tec  40
ZTE  23
TCT mobile  20
Amazon Tech  19
Nintendo  17
Intel  14
Microsoft  9
-hp-  8
BLU Product  8
Kyocera  8
AsusTek  6
Yulong Comp  6
Lite-On  4
Sony Mobile  4
Z-COM, INC.  4
ARRIS Group  2
AzureWave  2
Barnes&Nobl  2
Canon  2
Ford Motor  2
Foxconn  2
Google, Inc  2
Motorola (W  2
Sonos, Inc.  2
SparkLAN Co  2
Wi2Wi, Inc  2
Xiaomi Comm  2
Alps Electr  1
Askey  1
BlackBerry  1
Chi Mei Com  1
Clover Netw  1
CNet Techno  1
eSSys Co.,L  1
GoPro  1
InPro Comm  1
JJPlus Corp  1
Private  1
Quanta  1
Raspberry P  1
Roku, Inc.  1
Sonim Techn  1
Texas Instr  1
TP-LINK TEC  1
Vizio, Inc  1

All Systems Go! 2017 CfP Closes Soon!

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/all-systems-go-2017-cfp-closes-soon.html

The All Systems Go! 2017 Call for Participation is Closing on September 3rd!

Please make sure to get your presentation proposals for All Systems Go! 2017 in now! The CfP closes on Sunday!

In case you haven’t heard about All Systems Go! yet, here’s a quick reminder of what kind of conference it is, and why you should attend and speak there:

All Systems Go! is an Open Source community conference focused
on the projects and technologies at the foundation of modern Linux
systems — specifically low-level user-space technologies. Its goal is
to provide a friendly and collaborative gathering place for
individuals and communities working to push these technologies
forward. All Systems Go! 2017 takes place in Berlin,
Germany
on October 21st+22nd. All Systems Go! is a
2-day event with 2-3 talks happening in parallel. Full presentation
slots are 30-45 minutes in length and lightning talk slots are 5-10
minutes.

In particular, we are looking for sessions including, but not limited to, the following topics:

  • Low-level container executors and infrastructure
  • IoT and embedded OS infrastructure
  • OS, container, IoT image delivery and updating
  • Building Linux devices and applications
  • Low-level desktop technologies
  • Networking
  • System and service management
  • Tracing and performance measuring
  • IPC and RPC systems
  • Security and Sandboxing

While our focus is definitely more on the user-space side of things,
talks about kernel projects are welcome too, as long as they have a
clear and direct relevance for user-space.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

systemd.conf will not take place this year, in favor of All Systems Go!.
All Systems Go! welcomes all projects that
contribute to Linux user space, which, of course, includes
systemd. Thus, anything you think was appropriate for submission to
systemd.conf is also fitting for All Systems Go!

Hard Drive Stats for Q2 2017

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-failure-stats-q2-2017/

Backblaze Drive Stats Q2 2017

In this update, we’ll review the Q2 2017 and lifetime hard drive failure rates for all our current drive models. We also look at how our drive migration strategy is changing the drives we use, and we check in on our enterprise class drives to see how they are doing. Along the way, we’ll share our observations and insights, and as always, we welcome your comments and critiques.

Since our last report for Q1 2017, we have added 635 additional hard drives to bring us to the 83,151 drives we’ll focus on. In Q1 we added over 10,000 new drives to the mix, so adding just 635 in Q2 seems “odd.” In fact, we added 4,921 new drives and retired 4,286 old drives as we migrated from lower density drives to higher density drives. We cover more about migrations later on, but first let’s look at the Q2 quarterly stats.

Hard Drive Stats for Q2 2017

We’ll begin our review by looking at the statistics for the period of April 1, 2017 through June 30, 2017 (Q2 2017). This table includes 17 different 3 ½” drive models that were operational during the indicated period, ranging in size from 3 to 8 TB.

Quarterly Hard Drive Failure Rates for Q2 2017

When looking at the quarterly numbers, remember to look for those drives with at least 50,000 drive days for the quarter. That works out to about 550 drives running the entire quarter, which is a good sample size. If the sample size is below that, the failure rates can be skewed by a small change in the number of drive failures.
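
For the curious, the arithmetic behind that threshold and the annualized failure rates in these tables is straightforward. A quick sketch, assuming the usual drive-stats definition of AFR as failures per drive-year; the failure count in the example is invented:

```python
DAYS_IN_QUARTER = 91  # Q2 2017 ran April 1 through June 30

def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """AFR (%) = failures / drive-years * 100, where drive-years = drive_days / 365."""
    return failures / (drive_days / 365.0) * 100.0

print(50_000 / DAYS_IN_QUARTER)            # ~549 drives running the entire quarter
print(annualized_failure_rate(2, 50_000))  # 2 failures in 50,000 drive days -> ~1.46% AFR
```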

As noted previously, we use the quarterly numbers to look for trends. So this time we’ve included a trend indicator in the table. The “Q2Q Trend” column is short for quarter-to-quarter trend, i.e. last quarter to this quarter. We can add, change, or delete trend columns depending on community interest. Let us know what you think in the comments.

Good Migrations

In Q2 we continued with our data migration program. For us, a drive migration means we intentionally remove a good drive from service and replace it with another drive. Drives that are removed via migrations are not counted as failed. Once they are removed they stop accumulating drive hours and other stats in our system.

There are three primary drivers for our migration program.

  1. Increase Storage Density – For example, in Q3 we replaced 3 TB drives with 8 TB drives, more than doubling the amount of storage in a given Storage Pod for the same footprint. The cost of electricity was nominally more with the 8 TB drives, but the increase in density more than offset the additional cost. For those interested you can read more about the cost of cloud storage here.
  2. Backblaze Vaults – Our Vault architecture has proven to be more cost effective over the past two years than using stand-alone Storage Pods. A major goal of the migration program is to have the entire Backblaze cloud deployed on the highly efficient and resilient Backblaze Vault architecture.
  3. Balancing the Load – With our Phoenix data center online and accepting data, we have migrated some systems to the Phoenix DC. Don’t worry, we didn’t put your data on a truck and drive it to Phoenix. We simply built new systems there and transferred the data from our Northern California DC. In the process, we are gaining valuable insights as we move towards being able to replicate data between the two data centers.

During Q2 we migrated the data on 155 systems, giving nearly 30 petabytes of data a new, more durable, place to call home. There are still 644 individual Storage Pods (Storage Pod Classics, as we call them) left to migrate to the Backblaze Vault architecture.

Just in case you don’t know, a Backblaze Vault is a logical collection of 20 beefy Storage Pods (not Classics). Using our own Reed-Solomon erasure coding library, data is spread out across the 20 Pods into 17 data shards and 3 parity shards. The data and parity shards of each arriving data blob can be stored on different Storage Pods in a given Backblaze Vault.
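
A short sketch of the arithmetic behind that 17+3 layout: with 3 parity shards, any 17 of the 20 Storage Pods are enough to reconstruct the data, and the raw-storage overhead stays modest compared to keeping whole copies. This illustrates the shard math only, not Backblaze’s actual erasure-coding library:

```python
from math import comb

DATA_SHARDS, PARITY_SHARDS = 17, 3
TOTAL = DATA_SHARDS + PARITY_SHARDS  # the 20 Storage Pods in a Backblaze Vault

# Reed-Solomon with 3 parity shards tolerates the loss of any 3 of the 20 shards:
# every subset of 17 surviving shards can rebuild the original data.
for lost in range(5):
    outcome = "survives" if lost <= PARITY_SHARDS else "may be lost"
    print(f"{lost} pods down ({comb(TOTAL, lost)} possible combinations): data {outcome}")

# Raw bytes stored per logical byte, versus 2x-3x for plain replication:
print(f"overhead: {TOTAL / DATA_SHARDS:.2f}x")  # 1.18x
```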

Lifetime Hard Drive Failure Rates for Current Drives

The table below shows the failure rates for the hard drive models we had in service as of June 30, 2017. This is over the period beginning in April 2013 and ending June 30, 2017. If you are interested in the hard drive failure rates for all the hard drives we’ve used over the years, please refer to our 2016 hard drive review.

Cumulative Hard Drive Failure Rates

Enterprise vs Consumer Drives

We added 3,595 enterprise class 8 TB drives in Q2 bringing our total to 6,054 drives. You may be tempted to compare the failure rates of the 8 TB enterprise drive (model: ST8000NM005) to the consumer 8 TB drive (model: ST8000DM002), and conclude the enterprise drives fail at a higher rate. Let’s not jump to that conclusion yet, as the average operational age of the enterprise drives is only 2.11 months.

There are some insights we can gain from the current data. The enterprise drives have 363,282 drive days and an annualized failure rate of 1.61%. If we look back at our data, we find that as of Q3 2016, the 8 TB consumer drives had 422,263 drive days with an annualized failure rate of 1.60%. That means that when both drive models had a similar number of drive days, they had nearly the same annualized failure rate. There are no conclusions to be made here, but the observation is worth considering as we gather data for our comparison.

Next quarter, we should have enough data to compare the 8 TB drives, but by then the 8 TB drives could be “antiques.” In the next week or so, we’ll be installing 12 TB hard drives in a Backblaze Vault. Each 60-drive Storage Pod in the Vault will have 720 TB of storage available, and a 20-pod Backblaze Vault will have 14.4 petabytes of raw storage.

Better Late Than Never

Sorry for being a bit late with the hard drive stats report this quarter. We were ready to go last week, then this happened. Some folks here thought that was more important than our Q2 Hard Drive Stats. Go figure.

Drive Stats at the Storage Developers Conference

We will be presenting at the Storage Developers Conference in Santa Clara on Monday September 11th at 8:30am. We’ll be reviewing our drive stats along with some interesting observations from the SMART stats we also collect. The conference is the leading event for technical discussions and education on the latest storage technologies and standards. Come join us.

The Data For This Review

If you are interested in the data from the two tables in this review, you can download an Excel spreadsheet containing the two tables. Note: the domain for this download will be f001.backblazeb2.com.

You also can download the entire data set we use for these reports from our Hard Drive Test Data page. You can download and use this data for free for your own purposes. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone. It is free.

Good luck, and let us know if you find anything interesting.

The post Hard Drive Stats for Q2 2017 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Empowerment, Engagement, and Education for Women in Tech

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/empowerment-engagement-and-education-for-women-in-tech/

I’ve been earning a living in the technology industry since 1977, when I worked in one of the first computer stores in the country as a teenager. Looking back over the past 40 years, and realizing that the Altair, IMSAI, Sol-20, and North Star Horizon machines that I learned about, built, debugged, programmed, sold, and supported can now be seen in museums (Seattle’s own Living Computer Museum is one of the best), helps me to appreciate that the world I live in changes quickly, and to understand that I need to do the same. This applies to technology, to people, and to attitudes.

I lived in a suburb of Boston in my early teens. At that time, diversity meant that one person in my public school had come all the way from (gasp) England a few years earlier. When I went to college I began to meet people from other countries and continents and to appreciate the fresh vantage points and approaches that they brought to the workplace and to the problems that we tackled together.

Back in those days, there were virtually no women working as software engineers, managers, or entrepreneurs. Although the computer store was owned by a couple and the wife did all of the management, this was the exception rather than the rule at that time, and for too many years after that. Today, I am happy to be part of a team that brings together the most capable people, regardless of their gender, race, background, or anything other than their ability to do a kick-ass job (Ana, Tara, Randall, Tina, Devin, and Sara, I’m talking about all of you).

We want to do all that we can to encourage young women to prepare to become the next generation of engineers, managers, and entrepreneurs. AWS is proud to support Girls Who Code (including the Summer Immersion Program), Girls in Tech, and other organizations supporting women and underrepresented communities in tech. I sincerely believe that these organizations will be able to move the needle in the right direction. However, like any large-scale social change, this is going to take some time with results visible in years and decades, and only with support & participation from those of us already in the industry.

In conjunction with me&Eve, we were able to speak with some of the attendees at the most recent Girls in Tech Catalyst conference (that’s our booth in the picture). Click through to see what the attendees had to say:

I’m happy to be part of an organization that supports such a worthwhile cause, and that challenges us to make our organization ever-more diverse. While reviewing this post with my colleagues I learned about We Power Tech, an AWS program designed to build skills and foster community and to provide access to Amazon executives who are qualified to speak about the program and about diversity. In conjunction with our friends at Accenture, we have assembled a strong Diversity at re:Invent program.

Jeff;

PS – I did my best to convince Ana, Tara, Tina, or Sara to write this post. Tara finally won the day when she told me “You have raised girls into women, and you are passionate in seeing them succeed in their chosen fields with respect and equity. Your post conveying that could be powerful.”

NetDev 2.2 registration is now open

Post Syndicated from jake original https://lwn.net/Articles/731573/rss

The registration for the NetDev 2.2 networking conference is now open. It will be held in Seoul, Korea, November 8-10. As usual, it will be preceded by the invitation-only Netconf for core kernel networking hackers. “Netdev 2.2 is a community-driven conference geared towards Linux netheads. Linux kernel networking and user space utilization of the interfaces to the Linux kernel networking subsystem are the focus. If you are using Linux as a boot system for proprietary networking, then this conference _may not be for you_.” LWN covered these conferences in 2016 and earlier this year; with luck, we will cover these upcoming conferences as well.

All Systems Go! 2017 Speakers

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/all-systems-go-2017-speakers.html

The All Systems Go! 2017 Headline Speakers Announced!

Don’t forget to send in your submissions to the All Systems Go! 2017 CfP! Proposals are accepted until September 3rd!

A couple of headline speakers have been announced now:

  • Alban Crequy (Kinvolk)
  • Brian “Redbeard” Harrington (CoreOS)
  • Gianluca Borello (Sysdig)
  • Jon Boulle (NStack/CoreOS)
  • Martin Pitt (Debian)
  • Thomas Graf (covalent.io/Cilium)
  • Vincent Batts (Red Hat/OCI)
  • (and yours truly)

These folks will also review your submissions as part of the papers committee!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

EFF: Bassel Khartabil, In Memoriam

Post Syndicated from ris original https://lwn.net/Articles/729644/rss

The Electronic Frontier Foundation reports
that Bassel Khartabil, Syrian open source developer, blogger,
entrepreneur, hackerspace founder, and free culture advocate, was executed
by the Syrian authorities. “Bassel was a central figure in the
global free culture movement, connecting it and promoting it to Syria’s
emerging tech community as it existed before the country was ransacked by
civil war. He co-founded Aiki Lab, Syria’s first hackerspace, in Damascus
in 2010. He was a contributor to Mozilla’s Firefox browser and the Syrian
lead for Creative Commons. His influence went beyond Syria, however: he was
a key attendee at the Middle East’s bloggers’ conferences, and played a
vital role in the negotiations in Doha in 2010 that led to a common
language for discussing fair use and copyright across the Arab-speaking
world.
” (Thanks to Paul Wise)

Is DefCon Wifi safe?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/is-defcon-wifi-safe.html

DEF CON is the largest U.S. hacker conference that takes place every summer in Las Vegas. It offers WiFi service. Is it safe?

Probably.

The trick is that you need to download the certificate from https://wifireg.defcon.org and import it into your computer. They have instructions for all your various operating systems. For macOS, it was as simple as downloading “dc25.mobileconfig” and importing it.

I haven’t validated that the DefCon team did the right thing for all platforms, but I know that safety is possible. If a hacker could easily hack into arbitrary WiFi, then equipment vendors would fix it. Corporations widely use WiFi — they couldn’t do this if it weren’t safe.

The first step in safety is encryption, obviously. WPA does encryption well, so you are good there.

The second step is authentication — proving that the access-point is who it says it is. Otherwise, somebody could set up their own access-point claiming to be “DefCon”, and you’d happily connect to it. An encrypted connection to an evil access-point doesn’t help you. This is what the certificate you download does — you import it into your system, so that you’ll trust only the “DefCon” access-point that has the private key.
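
The pinning idea itself fits in a few lines. This is a rough sketch of the concept only, not DefCon’s actual mechanism (the downloaded profile configures 802.1X in the OS supplicant); the expected fingerprint is a placeholder you would obtain out of band:

```python
import hashlib
import ssl

EXPECTED_SHA256 = "0" * 64  # placeholder: the certificate fingerprint you already trust

def server_cert_fingerprint(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))  # certificate the server presents
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

if server_cert_fingerprint("wifireg.defcon.org") != EXPECTED_SHA256:
    raise SystemExit("certificate does not match the pinned fingerprint")
print("certificate matches the pinned fingerprint")
```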

That’s not to say you are completely safe. There’s a known vulnerability for the Broadcom WiFi chip embedded in many devices, including iPhone and Android phones. If you have one of these devices, you should either upgrade your software with a fix or disable WiFi.

There may also be unknown vulnerabilities in WiFi stacks. The Broadcom bug shows that after a couple of decades, we still haven’t solved the problem of simple buffer overflows in WiFi stacks/drivers. Thus, some hacker may have an unknown 0day vulnerability they are using to hack you.

Of course, this can apply to any WiFi usage anywhere. Frankly, if I had such an 0day, I wouldn’t use it at DefCon. Along with black-hat hackers, DefCon is full of white-hat researchers monitoring the WiFi — looking for hackers using exploits. They are likely to discover the 0day and report it. Thus, I’d rather use such 0days in international airports, catching business types and getting into their company secrets. Or targeting government types.

So it’s impossible to guarantee any security. But what the DefCon network team has done looks right, the same sort of thing corporations do to secure themselves, so you are probably secure.

On the other hand, don’t use “DefCon-Open” — not only is it insecure, there are explicitly a ton of hackers spying on it at the “Wall of Sheep” to point out the “sheep” who don’t secure their passwords.

Introducing Our Content Director: Roderick

Post Syndicated from Yev original https://www.backblaze.com/blog/introducing-content-director-roderick/

As Backblaze continues to grow, and as we go down the path of sharing our stories, we found ourselves in need of someone who could wrangle our content calendar, write blog posts, and come up with interesting ideas that we could share with our readers and fans. We put out the call, and found Roderick! As you’ll read below, he has an incredibly interesting history, and we’re thrilled to have his perspective join our marketing team! Let’s learn a bit more about Roderick, shall we?

What is your Backblaze Title?
Content Director

Where are you originally from?
I was born in Southern California, but have lived a lot of different places, including Alaska, Washington, Oregon, Texas, New Mexico, Austria, and Italy.

What attracted you to Backblaze?
I met Gleb a number of years ago at the Failcon Conference in San Francisco. I spoke with him and was impressed with him and his description of the company. We connected on LinkedIn after the conference and I ultimately saw his post for this position about a month ago.

What do you expect to learn while being at Backblaze?
I hope to learn about Backblaze’s customers and dive deep into the latest in cloud storage and other technologies. I also hope to get to know my fellow employees.

Where else have you worked?
I’ve worked for Microsoft, Adobe, Autodesk, and a few startups. I’ve also consulted for Apple, HP, Stanford, the White House, and startups in the U.S. and abroad. I mentored at incubators in Silicon Valley, including IndieBio and Founders Space. I used to own vineyards and a food education and event center in the Napa Valley with my former wife, and worked in a number of restaurants, hotels, and wineries. Recently, I taught part-time at the Culinary Institute of America at Greystone in the Napa Valley. I’ve been a partner in a restaurant and currently am a partner in a mozzarella di bufala company in Marin county where we have about 50 water buffalo that are amazing animals. They are named after famous rock and roll vocalists. Our most active studs now are Sting and Van Morrison. I think singing “a fantabulous night to make romance ‘neath the cover of October skies” works for Van.

Where did you go to school?
I studied at Reed College, U.C. Berkeley, U.C. Davis, and the Università per Stranieri di Perugia in Italy. I put myself through college so was in and out of school a number of times to make money. Some of the jobs I held to earn money for college were cook, waiter, dishwasher, bartender, courier, teacher, bookstore clerk, head of hotel maintenance, bookkeeper, lifeguard, journalist, and commercial salmon fisherman in Alaska.

What’s your dream job?
I think my dream would be having a job that would continually allow me to learn new things and meet new challenges. I love to learn, travel, and be surprised by things I don’t know.

I love animals and sometimes think I should have become a veterinarian.

Favorite place you’ve traveled?
I lived and studied in Italy, and would have to say the Umbria region of Italy is perhaps my favorite place. I also worked in my father’s home country of Austria, which is incredibly beautiful.

Favorite hobby?
I love foreign languages, and have studied Italian, French, German, and a few others. I am a big fan of literature and theatre and read widely and have attended theatre productions all over the world. That was my motivation to learn other languages—so I could enjoy literature and theatre in the languages they were written in. I started scuba diving when I was very young because I wanted to be Jacques-Yves Cousteau and explore the oceans. I also sail, motorcycle, ski, bicycle, hike, play music, and hope to finish my pilot’s license someday.

Coke or Pepsi?
Red Burgundy

Favorite food?
Both my parents are chefs, so I was exposed to a lot of great food growing up. I would have to give more than one answer to that question: fresh baked bread and bouillabaisse. Oh, and white truffles.

Not sure we’ll be able to stock our cupboards with Red Burgundy, but we’ll see what our office admin can do! Welcome to the team!

The post Introducing Our Content Director: Roderick appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

timeShift(GrafanaBuzz, 1w) Issue 5

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/21/timeshiftgrafanabuzz-1w-issue-5/

We cover a lot of ground in this week’s timeShift. From building your own plugin and finding the right dashboard to configuring alerting and monitoring your local weather, there’s something for everyone. Are you writing an article about Grafana, or have you come across an article you found interesting? Please get in touch, and we’ll add it to our roundup.


From the Blogosphere

  • Going open-source in monitoring, part III: 10 most useful Grafana dashboards to monitor Kubernetes and services: We have hundreds of pre-made dashboards ready for you to install into your on-prem or hosted Grafana, but not every one will fit your specific monitoring needs. In part three of the series, Sergey discusses his experiences with finding useful dashboards and shows off ten of the best dashboards you can install for monitoring Kubernetes clusters and the services deployed on them.

  • Using AWS Lambda and API gateway for server-less Grafana adapters: Sometimes you’ll want to visualize metrics from a data source that may not yet be supported in Grafana natively. With the plugin functionality introduced in Grafana 3.0, anyone can create their own data sources. Using the SimpleJson data source, Jonas describes how he used AWS Lambda and AWS API Gateway to write data source adapters for Grafana; a minimal sketch of the endpoint contract SimpleJson expects appears after this list.

  • How to Use Grafana to Monitor JMeter Non-GUI Results – Part 2: A few issues ago we listed an article for using Grafana to monitor JMeter non-GUI results, which required a number of non-trivial steps to complete. This article shows off an easier way to accomplish this that doesn’t require any additional configuration of InfluxDB.

  • Programming your Personal Weather Chart: It’s always great to see Grafana used outside of the typical dev-ops use case. This article runs you through the steps to create your own weather chart and show off your local weather stats in Grafana. BONUS: Rob shows off a magic mirror he created, which can display this data.

  • vSphere Performance data – Part 6 – The Dashboard(s): This 6-part series goes into a ton of detail and walks you through the various methods of retrieving vSphere performance data, storing the data in a TSDB, and creating dashboards for the metrics. Part 6 deals specifically with Grafana, but I highly recommend reading all of the articles, as it chronicles the journey of metrics exploration, storage, and visualization from someone who had no prior experience with time series data.

  • Alerting in Grafana: Alerting in Grafana is a fairly new feature and one that we’re continuing to iterate on. We’re soon adding additional data source support, new notification channels, clustering, silencing rules, and more. This article steps you through all the configuration options to get you to your first alert.
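
As a companion to the SimpleJson item above, here is a minimal sketch of the HTTP contract that data source expects, written as a plain Flask app rather than the Lambda/API Gateway deployment the article describes; the metric names and values are made up:

```python
import time

from flask import Flask, jsonify, request

app = Flask(__name__)
METRICS = ["requests_per_second", "queue_depth"]  # illustrative metric names

@app.route("/")
def health():  # Grafana's "Save & Test" checks the root endpoint
    return "ok"

@app.route("/search", methods=["POST"])
def search():  # list the metrics Grafana can choose from
    return jsonify(METRICS)

@app.route("/query", methods=["POST"])
def query():  # one series per requested target, as [value, epoch-milliseconds] pairs
    targets = request.get_json()["targets"]
    now_ms = int(time.time() * 1000)
    return jsonify([{"target": t["target"], "datapoints": [[42, now_ms]]} for t in targets])
```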


Plugins and Dashboards

It can seem like work slows during July and August, but we’re still seeing a lot of activity in the community. This week we have a new graph panel to show off that gives you some unique-looking dashboards, and an update to the Zabbix data source, which adds some really great features. You can install both of the plugins now on your on-prem Grafana via our CLI, or with one click on GrafanaCloud.

NEW PLUGIN

Bubble Chart Panel: This super-cool-looking panel groups your tag values into clusters of circles. The size of each circle represents the aggregated value of the time series data. There are also multiple color schemes to make those bubbles POP (pun intended)! Currently it works against OpenTSDB and Bosun, so give it a try!

Install Now

UPDATED PLUGIN

Zabbix: Alex has been hard at work making improvements to the Zabbix App for Grafana. This update adds annotations, template variables, alerting, and more. Thanks, Alex! If you’d like to try out the app, head over to http://play.grafana-zabbix.org/dashboard/db/zabbix-db-mysql?orgId=2

Install 3.5.1 Now


This week’s MVC (Most Valuable Contributor)

Open source software can’t thrive without the contributions from the community. Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback.

mk-dhia (Dhia)
Thank you so much for your improvements to the Elasticsearch data source!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

This week’s tweet comes from @geek_dave

Great looking dashboard Dave! And thank you for adding new features and keeping it updated. It’s creators like you who make the dashboard repository so awesome!


Upcoming Events

We love when people talk about Grafana at meetups and conferences.

Monday, July 24, 2017 – 7:30pm | Google Campus Warsaw


Ząbkowska 27/31, Warsaw, Poland

IoT & HOME AUTOMATION #3 openHAB, InfluxDB, Grafana:
If you are interested in topics of the internet of things and home automation, this might be a good occasion to meet people similar to you. If you are into it, we will also show you how we can all work together on our common projects.

RSVP


Tell Us How We’re Doing

We’d love your feedback on what kind of content you like, length, format, etc – so please keep the comments coming! You can submit a comment on this article below, or post something at our community forum. Help us make this better.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Hightail — Empowering Creative Collaboration in the Cloud

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/hightail-empowering-creative-collaboration-in-the-cloud/

Hightail – formerly YouSendIt – streamlines how creative work is reviewed, improved, and approved by helping more than 50 million professionals around the world get great content in front of their audiences faster. Since its debut in 2004 as a file sharing company, Hightail shifted its strategic direction to focus on delivering value-added creative collaboration services and boasts a strong lineup of name-brand customers.

In today’s guest post, Hightail’s SVP of Technology Shiva Paranandi tells the company’s migration story, moving petabytes of data from on-premises to the cloud. He highlights their cloud vendor evaluation process and reasons for going all-in on AWS.


Hightail started as a way to help people easily share and store large files, but has since evolved into a creative collaboration tool. We became a place where users could not only control and share their digital assets, but also assemble their creative teams, connect with clients, develop creative workflows, and manage projects from start to finish. We now power collaboration services for major brands such as Lionsgate and Jimmy Kimmel Live!. With a growing list of domestic and international clients, we required more internal focus on product development and serving the users. We found that running our own data centers consumed more time, money, and manpower than we were willing to devote.

We needed an approach that would help us iterate more rapidly to meet customer needs and dramatically improve our time to market. We wanted to reduce data center costs and have the flexibility to scale up quickly in any given region around the globe. Setting up a data center in a new location took so long that it was limiting the pace of growth that we could achieve. In addition, we were tired of buying ahead of our needs, which meant we had storage capacity that we did not even use. We required a storage solution that was both tiered and highly scalable to reduce costs by allowing us to keep infrequently used data in inactive storage while also allowing us to resurface it quickly at the customer’s request. Our main drivers were agility and innovation, and the cloud enables these in a significant way. Given that, we decided to adopt a cloud-first policy that would enable us to spend time and money on initiatives that differentiate our business, instead of putting resources into managing our storage and computing infrastructure.

Comparing AWS Against Cloud Competitors

To kick off the migration, we did our due diligence by evaluating a variety of cloud vendors, including AWS, Google, IBM, and Microsoft. AWS stuck out as the clear winner for us. At one point, we considered combining services from multiple cloud providers to meet our needs, but decided the best route was to use AWS exclusively. When we factored in training, synchronization, support, and system availability along with other migration and management elements, it was just not practical to take a multi-cloud approach. With the best cost savings and an unmatched ecosystem of partner solutions, we did not need anyone else and chose to go all-in on AWS.

By migrating to AWS, we were able to secure the lowest cost-per-gigabyte pricing, gain access to a rich ecosystem, quickly develop in-house talent, and maintain SOC II compliance. The ecosystem was particularly important to us and set AWS apart from its competitors with its expansive list of partners. In fact, all the vendors we depend on for services such as previewing images, encoding videos, and serving up presentations were already a part of the network, so we were easily able to leverage our existing investments and expertise. If we had gone with a different provider, it would have meant moving away from a platform that was already working well for us, which was not the desired outcome. Also, the amount of talent we were able to build up in house on AWS technologies was astounding. Training our internal team to work with AWS was a simple process using available tools such as AWS conferences, training materials, and support.

Migrating Petabytes of Data

Going with AWS made things easier. In many instances, it gave us better functionality than what we were using in house. We moved multiple petabytes of data from on-premises storage to AWS with ease. AWS gave us great speeds with Direct Connect, so we were able to push all the data in a little more than three months with no user impact. We employed AWS Key Management Service to keep our data secure, which eased our minds through the move. We performed extensive QA testing before flipping users over to ensure low customer impact, using methods such as checksums between our data center and the data that got pushed to AWS.
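
Checksum verification of a migration can be sketched in a few lines. A minimal illustration, assuming you hash the source files and compare them against a manifest of hashes computed from the destination copies; the paths and manifest format are invented, not Hightail’s actual tooling:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def mismatches(source_dir: Path, destination_manifest: dict[str, str]) -> list[str]:
    """Relative paths whose local hash disagrees with the destination's manifest."""
    return [
        str(p.relative_to(source_dir))
        for p in sorted(source_dir.rglob("*"))
        if p.is_file()
        and destination_manifest.get(str(p.relative_to(source_dir))) != sha256_of(p)
    ]
```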

Our new platform on AWS has greatly improved our user experience. We have seen huge improvement in reliability, performance, and uptime—all critical in our line of business. We are now able to achieve upload and download speeds up to 17 times faster than our previous data centers, and uptime has increased by orders of magnitude. Also, the time it takes us to deploy services to a new region has been cut by more than 90%. It used to take us at least six months to get a new region online, and now we can get a region up and running in less than three weeks. On AWS, we can even replicate data at the bucket level across regions for disaster recovery purposes.

To cut costs, we were successfully able to divide our storage infrastructure into frequently and infrequently accessed data. Tiered storage in Amazon S3 has been a huge advantage, allowing us to optimize our storage costs so we have more to invest in product development. We can now move data from inactive to active tiers instantly to meet customer needs and eliminated the need to overprovision our storage infrastructure. It is refreshing to see services automatically scale up or down during peak load times, and know that we are only paying for what we need.
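
One common way to express that kind of tiering in S3 is a bucket lifecycle configuration. A sketch using boto3; the bucket name and the 30/90-day thresholds are invented examples, not Hightail’s settings:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-inactive-data",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequently accessed
                {"Days": 90, "StorageClass": "GLACIER"},      # cold archive
            ],
        }]
    },
)
```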

Overall, we achieved our key strategic goal of focusing more on development and less on infrastructure. Our migration felt seamless, and the progress we were able to share is a true testament to how easy it has been for us to run our workloads on AWS. We attribute part of our successful migration to the dedicated support provided by the AWS team. They were pretty awesome. We had a couple of their technicians available 24/7 via chat, which proved to be essential during this large-scale migration.

-Shiva Paranandi, SVP of Technology at Hightail

Learning More

Learn more about cost-effective tiered data storage with Amazon S3, or dive deeper into our AWS Partner Ecosystem to see which solutions could best serve the needs of your company.