FOSDEM Talk on Video

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/fosdem2011-video.html

If you have already watched the presentation on
systemd I gave at linux.conf.au 2011,
then this video of my talk on the same topic, which I gave at FOSDEM
2011 in Brussels, Belgium, will probably not be all new to you. But the
questions from the audience (and hopefully my responses) might answer a
question or two you might still have. So do watch it:

Hmm, seems p.g.o strips the video from the blog post. So either read the original blog story or watch it directly on YouTube.

Oh, and FOSDEM rocked, like every year!


Everyone in USA: Comment against ACTA today!

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/02/15/acta.html

In the USA, the deadline for comments on ACTA
is today (Tuesday 15 February 2011) at 17:00 US/Eastern.
It’s absolutely imperative that every USA citizen submit a comment on
this. The Free Software Foundation has details on how to do so.

ACTA is a dangerous international agreement that would establish
additional criminal penalties, promulgate DMCA/EUCD-like legislation
around the world, and otherwise extend copyright law into places it
should not go. Copyright law is already much stronger than
anyone needs.

On a meta-point, it’s extremely important that USA citizens participate
in comment processes like this. The reason that things like ACTA can
happen in the USA is because most of the citizens don’t pay attention.
By way of hyperbolic fantasy, imagine if every citizen of the
USA wrote a letter today to Mr. McCoy about ACTA. It’d be a news story
on all the major news networks tonight, and would probably be in the
headlines in print/online news stories tomorrow. Our whole country
would suddenly be debating whether or not we should have criminal
penalties for copying TV shows, and whether breaking a DVD’s DRM should
be illegal.

Obviously, that fantasy won’t happen, but getting from where we are to
that wonderful fantasy is actually linear; each person who
writes to Mr. McCoy today makes a difference! Please take 15 minutes
out of your day today and do so. It’s the least you can do on this
issue.

The Free Software Foundation has a sample letter you can use if you
don’t have time to write your own. I wrote my own, giving some of my
unique perspective, which I include below.

The automated system on regulations.gov assigned the comment below the
tracking number 80bef9a1 (cool, it’s in hex! 🙂

Stanford K. McCoy
Assistant U.S. Trade Representative for Intellectual Property and Innovation
Office of the United States Trade Representative
600 17th St NW
Washington, DC 20006

Re: ACTA Public Comments (Docket no. USTR-2010-0014)

Dear Mr. McCoy:

I am a USA citizen writing to urge that the USA not sign
ACTA. Copyright law already reaches too far. ACTA would extend
problematic, overly-broad copyright rules around the world and would
increase the already inappropriate criminal penalties for copyright
infringement here in the USA.

Both individually and as an agent of my employer, I am regularly involved
in copyright enforcement efforts to defend the Free Software license
called the GNU General Public License (GPL). I therefore think my
perspective can be uniquely contrasted with other copyright holders who
support ACTA.

Specifically, when engaging in copyright enforcement for the GPL, we treat
it as purely a civil issue, not a criminal one. We have been successful
in defending the rights of software authors in this regard without the
need for criminal penalties for the rampant copyright infringement that we
often encounter.

I realize that many powerful corporate copyright holders wish to see
criminal penalties for copyright infringement expanded. As someone who
has worked in the area of copyright enforcement regularly for 12 years, I
see absolutely no reason that any copyright infringement of any kind ever
should be considered a criminal matter. Copyright holders who believe
their rights have been infringed have the full power of civil law to
defend their rights. Using the power of government to impose criminal
penalties for copyright infringement is an inappropriate use of government
to interfere in civil disputes between its citizens.

Finally, ACTA would introduce new barriers for those of us trying to
change our copyright law here in the USA. The USA should neither impose
its desired copyright regime on other countries, nor should the USA bind
itself in international agreements on an issue where its citizens are in
great disagreement about correct policy.

Thank you for considering my opinion, and please do not allow the USA to
sign ACTA.

Sincerely,
Bradley M. Kuhn

How the President’s Security Motorcade Works

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/t78uFcBrjVM/how-presidents-security-motorcade-works.html

Jalopnik links to The Atlantic’s Marc Ambinder’s great article on how the Secret Service handles a significant event, including details of how the motorcade is organized and run. For those who think about physical security, this is an interesting read including a diagram of each vehicle and its role.


LCA Talk on Video

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/lca2011-video.html

I won’t spare you the video of my talk about systemd at linux.conf.au 2011 in Brisbane, Australia last week:

Hmm, seems p.g.o strips the video from the blog post. So either read the original blog story or watch it directly on blip.tv.

LCA was fantastic and especially impressive given the circumstances of the recent floodings in Queensland. Really good conference, and congratulations to the organizers!


Ground Squirrels (Лалугери)

Post Syndicated from RealEnder original http://alex.stanev.org/blog/?p=274

Towards the end of last year, friends told me about the troubles that had landed on their heads: in a village near Sliven and the surrounding area, ground squirrels had suddenly spread everywhere. The seemingly adorable creatures were destroying the harvest, digging up gardens, and so on. A frequent topic in the village pub became the exchange of experience in driving off the pests, along with ingenious methods for exterminating them, bordering on ideas borrowed from horror films.
Around the same time, while people were wondering “Where did this divine plague come from!?”, I came across the following project in the public interface of ИСУН (the Bulgarian EU-funds management information system): “Restoring the ground squirrel population as a key element in maintaining the favourable conservation status of priority grassland habitats and populations of birds of prey in the Sinite Kamani Nature Park”. It is developing quite successfully, and in the very same area.
What a hit!
Now all that remains is for the two groups to meet. It would be a good idea to have police and ambulances on hand for those who will want to use them afterwards 😉

FOSDEM Interview with Yours Truly

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/fosdem2011.html

The FOSDEM organizers just published a brief interview with yours
truly regarding the presentation about systemd I will be giving there
on Sat. Feb. 5th, 3pm. If you come to Brussels, make sure to drop by!
And even if you don’t, have a look at the interview!

If you don’t make it to Brussels, there are two more stops in my
little systemd World Tour in the next weeks: today
(Wed. Jan. 26th, 2:30pm) I will be speaking at linux.conf.au in
Brisbane, Australia. And on Fri. Feb. 11th, 1:20pm I’ll be speaking at
the Red Hat Developer Conference in Brno, Czech Republic.


A Brief Tutorial on a Shared Git Repository

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/01/23/git-shared-repository-tutorial.html

A while ago, I set up Git for a group privately sharing the same
central repository. Specifically, this is a tutorial for those who
want a Git setup that is a little bit like an SVN repository: a
central repository where all the branches that matter are published in
one place. I found this file today floating in a directory of
“things I should publish at some point”, so I decided just to put it
up; every time I came across this file, it reminded me that it’s
really morally wrong (IMO) to keep generally useful technical
information private, even when it’s only laziness that’s causing
it.

Before you read this, note that most developers don’t use Git this way,
particularly with the advent of shared hosting facilities like
Gitorious, since systems like Gitorious solve the odd problems that
this tutorial addresses. When I originally wrote this (more than a
year ago), the only well-known project I found using a system like
this was Samba; I haven’t seen many other projects that do. Indeed,
this process is not really what Git is designed for, but sometimes
groups that are used to SVN expect there to be a “canonical
repository” that has all the contents of the shared work under one
proverbial roof, and set up a “one true Git repository” for the
project from which everyone clones.

Thus, this tutorial is primarily targeted at a user mostly familiar
with an SVN workflow who has ssh access to
host.example.org, where a writable (usually by multiple people)
Git repository lives in the directory
/git/REPOSITORY.git/.

Ultimately, the stuff that I’ve documented herein basically fills
in the gaps that I found when reading the following tutorials:

The Git Crash Course for SVN Users. (NOTE: some things in that
tutorial, where the author says various commands are equivalent to
various svn commands, are misleading. It would be better said as: if
you do foo in Git, it will feel like you were using svn and did bar.)

Linux Developers’ Git Manual

The Official Git Tutorial.

A collection site for Git documentation (which includes some links
already here).

The Samba guys use Git very similarly to the method discussed in this
tutorial.

markpasc points out there is no svn cp equivalent in git.

reinh’s blog post is the first thing I ever read that explained git
push particularly well.

So, here’s my tutorial, FWIW. (I apologize that I commit the mortal
sin of tutorial writing: I drift wildly between second-person
singular, first-person plural, and third-person passive voice. If
someone sends me a patch to the HTML file that fixes this, I’ll fix it. 🙂

Initial Setup

Before you start using git, you should run these commands to let it
know who you are so your info appears correctly in commit logs:

$ git config --global user.email you@example.org
$ git config --global user.name "Your Real Name"

Examining Your First Clone

To get started, first we clone the repository:

$ git clone ssh://host.example.org/git/REPOSITORY.git/

Now, note that Git almost always operates in terms of
branches. Unlike Subversion’s, Git’s branches are first-class citizens,
and most operations in Git operate around a branch. The default branch
is often called “master”, although I tend to avoid using the
master branch for much, mainly because everyone who uses Git has a
different perception of what the master branch should embody. Therefore,
giving all your branches more descriptive names is helpful. But when you
first import something into Git (for example, from existing Subversion
trees), everything from Subversion’s trunk is thrown onto the master
branch.

So, we take a look at the result of that clone command. We have a new
directory, called REPOSITORY, that contains a “working
checkout” of the repository, and under that there is one special
directory, REPOSITORY/.git/, which is a full copy of the repository. Note
that this is not like Subversion, where what you have on your local
machine is merely one view of the repository. With Git, you have a full
copy of everything. However, an interesting thing has been done on your
copy with the branches. You can take a look with these commands:

$ git branch
* master
$ git branch -r
origin/HEAD
origin/master

The first list shows the branches that are personal and local
to you. (By default, git branch uses the -l option,
which shows you only “local” branches; -r means
“remote” branches. You can also use -a to see all of
them.) Unless you take action to publish your local branches in some way,
they will be your private area to work in and live only on your
computer. (And be aware: they are not backed up unless you back them up!)
The remote ones, which all start with “origin/”, track the
progress on the shared repository.

(Note the term “origin” is a standard way of referring to
“the repository from whence you cloned”, and
origin/BRANCH refers to “BRANCH as it looks in the
repository from whence you cloned”. However, there is nothing
magical about the name “origin”. It’s set up to DTRT in your
WORKING-DIRECTORY/.git/config file, and the clone command set it
all up for you, which is why you have them now.)
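For reference, the relevant section that git clone writes into WORKING-DIRECTORY/.git/config looks roughly like this (the values below are illustrative, matching the clone command above):

```
[remote "origin"]
	url = ssh://host.example.org/git/REPOSITORY.git/
	fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
	remote = origin
	merge = refs/heads/master
```

The [branch "master"] stanza is the same kind of configuration we will set up by hand for new branches later in this tutorial.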

Get to Work

The canonical way to “get moving” with a new task in Git is
to somehow create a branch for it. Branches are designed to be cheap and
quick to create so that users will not be shy about creating a new one.
Naming conventions are your own, but generally I like to call a
branch USERNAME/TASK when I’m still not sure exactly what I’ll be
doing with it (i.e., who I will publish it to, etc.) You can always merge
it back into another branch, or copy it to another branch (perhaps using a
more formal name) later.

Where do you Start Your Branch From?

Once a repository exists, each branch in the repository comes from
somewhere — it has a parent. These relationships help Git know how
to easily merge branches together. So, the most typical way to start
a new branch of your own is to begin with an existing branch.
The git checkout command is the easiest way to do this:

git checkout -b USERNAME/feature origin/master

In this example, we’ve created our own local branch, called
USERNAME/feature, and it’s started from the current state
of origin/master. When you are getting started, you will
usually want to base your new branches on ones that exist on the
origin. This isn’t a rule; it’s just less confusing for a newbie if
all your branches have a parent revision that lives on the server.

Now, it’s important to note here that no branch stands still. It’s
best to think about a branch as a “moving pointer” to a linked
list of some set of revisions in the repository.

Every revision stored in Git, local or remote, has a SHA1, which is
computed from the revisions before it plus the new patch the revision
just applied.

Meanwhile, the only two substantive differences between one of these
SHA1 identifiers and an actual branch are that (a) Git keeps changing
which identifier the branch refers to as new commits come in (aka it
moves the branch’s HEAD), and (b) Git keeps track of the history of
identifiers the branch previously referred to.

So, above, when we asked git checkout to create a new branch called
USERNAME/feature based on origin/master, the two
important things to realize are that (a) your new branch has its HEAD
pointing at the same head that is currently the HEAD of
origin/master, and (b) you got a new list to start adding
revisions to in the new branch.

We didn’t have to use a branch for that. We could have simply started
our branch from the SHA1 of any old revision. We happened to want to
declare a relationship with the master branch on the server in
this case, but we could just as easily have picked any SHA1 from our
git log and used that one.

Do Not Fear the checkout

Every time you run a git checkout SOMETHING command, your
entire working directory changes. This normally scares Subversion users;
it certainly scared me the first time I used git checkout
SOMETHING. But, the only reason it is scary is because svn
switch, which is the roughly analogous command in the Subversion
world, so often doesn’t do something sane with your working copy. By
contrast, switching branches and changing your whole working directory is
a common occurrence with git.

Note, however, that you cannot do git checkout with
uncommitted changes in your directory (which, BTW, also makes it safer
than svn switch). However, don’t be too Subversion-user-like and
therefore afraid to commit things. Remember, with Git (and unlike with
Subversion), committing and publishing are two different operations. You
can commit to your heart’s content on local branches and merge or push
into public branches later. (There are even commands to squash many
commits into one before putting it on a public branch, in case you don’t
want people to see all the intermediate goofiness you might have done.
This is why, BTW, many Git users commit as often as an SVN user would save
in their editors.)
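One way to do that squashing is git merge --squash (my choice of command here; the text above doesn't name a specific one), which collapses every commit on a side branch into a single staged change on the target branch. A runnable sketch with invented names:

```shell
# Scratch repository: make two messy commits on a side branch, then
# land them on the starting branch as one tidy commit.
cd "$(mktemp -d)"
git init demo && cd demo
git config user.email "you@example.org"
git config user.name "Demo User"
echo "base" > file.txt; git add file.txt; git commit -m "base"
main=$(git symbolic-ref --short HEAD)   # whatever the default branch is

git checkout -b username/messy
echo "step 1" >> file.txt; git commit -am "wip 1"
echo "step 2" >> file.txt; git commit -am "wip 2"

git checkout "$main"
git merge --squash username/messy   # stages the combined change only
git commit -m "add the feature as one commit"
git log --pretty=oneline            # two commits total: base + feature
```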

However, if you must switch checkouts but really do fear making
commits, there is a tool for you: look into git stash.

Share with the Group

Once you’ve been doing some work, you’ll end up with some useful work
finished on a USERNAME/feature branch. As noted before, this is
your own private branch. You probably want to use the shared repository
to make your work available to others.

When using a shared Git repository, there are two ways to share your
branches with your colleagues. The first procedure is when you simply
want to publish directly on an existing branch. The second is when you
wish to create your own branch.

Publishing to Existing Branch

You may choose to merge your work directly into a known branch on the
remote repository. That’s a viable option, certainly, but often you want
to make it available on a separate branch for others to examine, even
before you merge it into something like the master branch.
We discuss the slightly more complicated new branch publication next, but
for the moment, we can consider the quicker process of publishing to an
existing branch.

Let’s consider when we have work on USERNAME/feature and we
would like to make it available on the master branch. Make sure
your USERNAME/feature branch is clean (i.e., all your changes are
committed).

The first thing you should verify is that you have what I call a
“local tracking branch” (this is my own term that I made up, I
think, you won’t likely see it in other documentation) that is tied
directly with the same name to the origin. This is not completely
necessary, but is much more convenient to keep track of what you are
doing. To check, do a:

$ git branch -a
* USERNAME/feature
master
origin/master

In the list, you should see both master and
origin/master. If you don’t have that, you should create it
with:

$ git checkout -b master origin/master

So, either way, you want to be on the master branch. To get
there if it already existed, you can run:

$ git checkout master

And you should be able to verify that you are now on master with:

$ git branch
* master

Now, we’re ready to merge in our changes:

$ git merge USERNAME/feature
Updating ded2fb3..9b1c0c9
Fast forward
FILE …
N files changed, X insertions(+), Y deletions(-)

If you don’t get any message about conflicts, everything is fine. Your
changes from USERNAME/feature are now on master. Next,
we publish it to the shared repository:

$ git push
Counting objects: N, done.
Compressing objects: 100% (A/A), done.
Writing objects: 100% (A/A), XXX bytes, done.
Total G (delta T), reused 0 (delta 0)
refs/heads/master: IDENTIFIER_X -> IDENTIFIER_Y
To ssh://host.example.org/git/REPOSITORY.git
X..Y master -> master

Your changes can now be seen by others when they git pull (See
below for details).

Publishing to a New Branch

Suppose that, instead of immediately putting the feature
on the master branch, you wanted to simply mirror your personal
feature branch to the rest of your colleagues so they can try it out
before it officially becomes part of master. To do that, you
first need to tell Git you want to make a new branch on the shared
repository. In this case, you have to use the git push command as
well. (It is a catch-all command for any operation you want to perform
on the remote repository without actually logging into the server where
the shared Git repository is hosted. Thus, not surprisingly, nearly any
git push command you can think of will require you to be
net.connected.)

So, first let’s create a local branch that has the actual name we want
to use publicly. To do this, we’ll just use the checkout command, because
it’s the most convenient and quick way to create a local branch from an
already existing local branch:

$ git branch -l
* USERNAME/feature
master

$ git checkout -b proposed-feature USERNAME/feature
Switched to a new branch “proposed-feature”
$ git branch -l
* proposed-feature
USERNAME/feature
master

Now, again, we’ve only created this branch locally. We need an
equivalent branch on the server, too. This is where git push comes in:

$ git push origin proposed-feature:refs/heads/proposed-feature

Let’s break that command down. The first argument for push is always
“the place you are pushing to”. That can be any sort of git
URL, including ssh://, http://, or git://. However, remember that the
original clone operation set up this shorthand “origin” to
refer to the place from whence we cloned. We’ll use that shorthand here
so we don’t have to type out that big long URL.

The second argument is a colon-separated item. The left hand side is
the local branch we’re pushing from on our local repository, and
the right hand side is the branch we are pushing to on the remote
repository.

(BTW, I have no idea why refs/heads/ is necessary. It seems
you should be able to say proposed-feature:proposed-feature and git would
figure out what you mean. But, in the setups I’ve worked with, it doesn’t
usually work if you don’t put in refs/heads/.)

That operation will take a bit to run, but when it is done we see
something like:

Counting objects: 35, done.
Compressing objects: 100% (31/31), done.
Writing objects: 100% (33/33), 9.44 MiB | 262 KiB/s, done.
Total 33 (delta 1), reused 27 (delta 0)
refs/heads/proposed-feature: 0000000000000000000000000000000000000000
-> CURRENT_HEAD_SHA1_SUM
To ssh://host.example.org/git/REPOSITORY.git/
* [new branch] proposed-feature -> proposed-feature

In older Git clients, you may not see that last line, and you won’t get
the origin/proposed-feature branch until you do a subsequent pull. I
believe newer git clients do the pull automatically for you.

Reconfiguring Your Client to see the New Remote Branch

Annoyingly, as the creator of the branch, we have some extra config
work to do to officially tell our repository copy that these two branches
should be linked. Git didn’t know from our single git push
command that our repository’s relationship with that remote branch was
going to be a long-term thing. To marry our local branch to
origin/proposed-feature, we must use these commands:

$ git config branch.proposed-feature.remote origin
$ git config branch.proposed-feature.merge refs/heads/proposed-feature

We can see that this branch now exists because we find:

$ git branch -a
* proposed-feature
USERNAME/feature
master
origin/HEAD
origin/proposed-feature
origin/master

After this is done, the remote repository has a
proposed-feature branch and, locally, we have a
proposed-feature branch that is a “local tracking
branch” of origin/proposed-feature. Note that
our USERNAME/feature, where all this stuff started from, is
still around too, but can be deleted with:

git branch -d USERNAME/feature

Finding It Elsewhere

Meanwhile, someone else who has separately cloned the repository before
we did this won’t see these changes automatically, but a simple git
pull command can get it:

$ git pull
remote: Generating pack…
remote: Done counting 35 objects.
remote: Result has 33 objects.
remote: Deltifying 33 objects…
remote: 100% (33/33) done
remote: Total 33 (delta 1), reused 27 (delta 0)
Unpacking objects: 100% (33/33), done.
From ssh://host.example.org/git/REPOSITORY.git
* [new branch] proposed-feature -> origin/proposed-feature
Already up-to-date.
$ git branch -a
* master
origin/HEAD
origin/proposed-feature
origin/master

However, their checkout directory won’t be updated to show the changes
until they make a local “mirror” branch to show them the
changes. Usually, this would be done with:

$ git checkout -b proposed-feature origin/proposed-feature

Then they’ll have a working copy with all the data and a local branch
to work on.

BTW, if you want to try this yourself just to see how it works, you can
always make another clone in some other directory just to play with, by
doing something like:

$ git clone ssh://host.example.org/git/SOME-REPOSITORY.git/
extra-clone-for-git-didactic-purposes

Now on this secondary checkout (which makes you just like the user who
is not the creator of the new branch), work can be pushed and pulled on
that branch easily. Namely, anything you merge into or commit on your
local proposed-feature branch will automatically be pushed to
origin/proposed-feature on the server when you git push. And,
anything that shows up from other users on the origin/proposed-feature
branch will show up when you do a git pull. These two branches were paired
together from the start.

Irrational Rebased Fears

When using a shared repository like this, git rebase will
usually screw something up. When Git is used in the
“normal way”, rebase is one of the amazing things about Git.
The rebase idea is: you unwind the entire work you’ve done on one of your
local branches, bring in changes that other people have made in the
meantime, and then reapply your changes on top of them.

It works out great when you use Git the way the Linux Project does.
However, if you use a single, shared repository in a work group, rebase
can be dangerous.

Generally speaking, though, with a shared repository, you can
use git merge and won’t need rebasing. My usual work flow is
that I get started on a feature with:

$ git checkout -b bkuhn/new-feature starting-branch

I work work work away on it. Then, when it’s ready, I send a patch around
to a mailing list that I generate with:

$ git diff $(git merge-base starting-branch bkuhn/new-feature) bkuhn/new-feature

Note that the thing in the $() returns a single identifier for a
version, namely, the version of the fork point between starting-branch and
bkuhn/new-feature. Therefore, the diff output is just the stuff I’ve
actually changed. This generates all the differences between the place
where I forked and my current work.
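If I recall correctly, Git’s three-dot diff syntax performs the same merge-base computation for you, so the command above can (I believe) be shortened. The scratch demo below checks that the two forms produce identical patches (branch and file names invented):

```shell
# Scratch repository: diverge two branches, then compare the explicit
# merge-base diff with the three-dot shorthand.
cd "$(mktemp -d)"
git init demo && cd demo
git config user.email "you@example.org"
git config user.name "Demo User"
echo "base" > file.txt; git add file.txt; git commit -m "base"
start=$(git symbolic-ref --short HEAD)   # the "starting-branch"

git checkout -b bkuhn/new-feature
echo "feature" >> file.txt; git commit -am "feature work"

# Meanwhile the starting branch moves on:
git checkout "$start"
echo "other" > other.txt; git add other.txt; git commit -m "unrelated"

# The explicit form from the text, and the three-dot shorthand:
git diff "$(git merge-base "$start" bkuhn/new-feature)" bkuhn/new-feature > explicit.patch
git diff "$start...bkuhn/new-feature" > threedot.patch
cmp explicit.patch threedot.patch && echo "identical"
```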

Once I have discussed and decided with my co-developers that we like
what I’ve done, I do this:

$ git checkout starting-branch
$ git merge bkuhn/new-feature

If all went well, this should automatically commit my feature into
starting-branch. Usually, there is also an origin/starting-branch, which
I’ve probably set up for automatic push/pull with my local
starting-branch, so I then can make the change officially by running:

$ git push

The fact that I avoid rebase is probably merely FUD, and if I learned
more, I could use it safely in cases with shared repository. But I have
no advice on how to make it work. In
particular, this
Git FAQ entry
shows quite clearly that my work sequence ceases to work
all that well when you do a rebase — namely, doing a git
push becomes more complicated.

I am sure a rebase would easily become necessary if I lived on
bkuhn/new-feature for a long time and there had been tons of changes
underneath me, but I generally try not to dive too deep into a fork,
although many people love DVCS because they can do just that. YMMV,
etc.

Free as in Freedom, Episode 0x07

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/01/18/faif-0x07.html

I realized that I should start regularly noting here on my blog when
the oggcast that I co-host with Karen Sandler is released. There are
perhaps folks who want content from my blog but haven’t subscribed to
the RSS feed of the show, and thus might want to know when new episodes
come out. If this annoys people reading this blog, please let me know
via email or identica.

In particular, perhaps readers won’t like that, in these posts (which
are going to be written after the show), I’m likely to drift off into
topics beyond what was talked about on the show, and there may be
“spoilers” for the oggcast in them. Again, if this annoys
you (or if you like it) please let me know.

Today’s FaiF episode is entitled “Revoked?”. The main issue of
discussion is some recent confusion about the GPLv2 release of
WinMTR. I was quoted in an article about the topic as well, and in the
oggcast we discuss this issue at length.

To summarize my primary point in the oggcast: I’m often troubled when
these issues come up, because I’ve seen these types of confusions so
many times before in the last decade. (I’ve seen this particular one,
almost exactly like this, at least five times.) I believe that those of
us who focus on policy issues in software freedom need to do a better
job documenting these sorts of issues.

Meanwhile, after we recorded the show I was thinking again about how Karen points out in the oggcast that the primary issues are
legal ones. I don’t really agree with that. These are policy
questions, that are perhaps informed by legal analysis, and it’s policy
folks (and, specifically, Free Software project leaders) that should be
guiding the discussion, not necessarily lawyers.

That’s not to say that lawyers can’t be policy folks as well; I
actually think Karen and a few other lawyers I know are both. The
problem is that if we simply take things like GPL on their face —
as if they are unchanging laws of nature that simply need to be
interpreted — we miss out on the fact that licenses, too, can have
bugs and can fail to work the way that they should. A lawyer’s job is
typically to look at a license, or a law, or something more or less
fixed in its existence and explain how it works, and perhaps argue for a
particular position of how it should be understood.

In our community, activists and project leaders who set (or influence)
policy should take such interpretations as input, and output plans to
either change the licenses and interpretation to make sure they properly
match the goals of software freedom, or to build up standards and
practices that work within the existing licensing and legal structure to
advance the goal of building a world where all published software is Free
Software.

So, those are a few thoughts I had after recording; be sure to listen to FaiF 0x07, available in ogg and mp3 formats.

Google Cache Prowling and Useful Firefox Security Plugins

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/fm5-ct_b1cI/google-cache-prowling-and-useful.html

I find that I often check Google’s cache of sites that have been taken down, either as part of an incident investigation or to verify that data has been removed. One of the nicer ways to do this is using a tool like Jeffrey To’s Google Cache Continue script. This joins my toolbox of existing Firefox plugins such as:

  • NoScript – script blocking
  • URLParams – inspect website GET/POST parameters
  • Firebug – live editing of pages, including JavaScript variables
  • FoxyProxy – a Firefox-based proxy switcher that works very well with web app testing tools
  • IETab – to pop an IE tab into a Firefox testing session
  • Leet Key – for Base64, hex, binary, and other transforms
  • ShowIP – shows the current site’s actual IP address, and enables a number of other useful host lookup tools

You can find a whole relation-mapped list of Firefox plugins in the FireCAT listing – enjoy!
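The same cache check can be done from the command line. A minimal sketch (the webcache URL pattern is the one in common use around this time, an assumption on my part rather than something from the post, and `example.com/some/page` is a stand-in target):

```shell
# Build the URL for Google's cached copy of a page.
# (URL pattern circa 2010; example.com/some/page is a stand-in target.)
target="example.com/some/page"
cache_url="https://webcache.googleusercontent.com/search?q=cache:${target}"
echo "$cache_url"
# curl -A 'Mozilla/5.0' "$cache_url" -o cached.html   # uncomment to fetch
```

Handy when a site is already down and you just want to confirm what the cache still holds.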


Security Humor: DMC-Eh?

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/UfC4-7tKooU/security-humor-dmc-eh.html

A recent take-down notice (which was of course sent to the wrong address) contained what has to be one of the best typos I’ve ever seen in such a missive:

“Hereby we inform you that the material listed hereunder are of Phonographic nature and are deemed harmful to minors by many governments and non-governmental organizations.”

We all knew the Internet was full of phonographic material, right?

Flickr Creative Commons attribution-licensed image courtesy cristinabe


Conservancy Activity Summary, 2010-10-01 to 2010-12-31

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/01/02/conservancy-1.html

[ Crossposted from Conservancy’s blog. ]

I had hoped to blog more regularly about my work at Conservancy, and
hopefully I’ll do better in the coming year. But now seems a good time
to summarize what has happened with Conservancy since I started my
full-time volunteer stint as Executive Director from 2010-10-01 until
2010-12-31.

New Members

We excitedly announced in the last few months two new Conservancy
member
projects, PyPy
and Git.
Thinking of PyPy connects me back to my roots in Computer Science: in
graduate school, I focused on research about programming language
infrastructure and, in particular, virtual machines and language
runtimes. PyPy is a project that connects Conservancy to lots of
exciting programming language research work of that nature, and I’m glad
they’ve joined.

For its part, Git rounds out a group of three DVCS projects that are
now Conservancy members; Conservancy is now the home of Darcs, Git, and
Mercurial. Amusingly, when the Git developers applied, I reminded them
that their “competition” were already members; they told me they were
inspired to apply precisely because these other
DVCSes had been happy in Conservancy. That’s a reminder that the
software freedom community remains a place where projects — even
that might seem on the surface as competitors — seek to get along
and work together whenever possible. I’m glad Conservancy now hosts all
these projects together.

Meanwhile, I remain in active discussions with five projects that have
been offered membership in Conservancy. As I always tell new projects,
joining Conservancy is a big step for a project, so it often takes time
for communities to discuss the details of Conservancy’s Fiscal
Sponsorship Agreement. It may be some time before these five projects
join, and perhaps they’ll ultimately decide not to join. However, I’ll
continue to help them make the right decision for their project, even if
joining a different fiscal sponsor (or not joining one at all) is
ultimately the right choice.

Also, about once every two weeks, another inquiry about joining
Conservancy comes in. We won’t be able to accept all the projects that
are interested, but hopefully many can become members of
Conservancy.

Annual Filings

In the late fall, I finished up Conservancy’s 2010 filings. Annual
filings for a non-profit can be an administrative rat-hole at times, but
the level of transparency they create for an organization makes them worth
it.
Conservancy’s FY 2009 Federal Form 990 and FY 2009 New York CHAR-500 are up on Conservancy’s filing page. I always make the filings available on our own website; I wish other non-profits would do this. It’s so annoying to have to go to a third-party source to grab these documents. (Although New York State, to its credit, makes all the NY NPO filings available on its website.)

Conservancy filed a Form 990-EZ in FY 2009. If you take a look, I’d
encourage you to direct the most attention to Part III (which is on the
top of page 2) to see most of Conservancy’s program activities between
2008-03-01 to 2009-02-28.

In FY 2010, Conservancy will move from the New York State requirement
of “limited financial review” to “full audit”
(see page 4 of the CHAR-500 for the level requirements). Conservancy
had so little funding in FY 2007 that it wasn’t required to file a Form 990 at all.
Now, just three years later, there is enough revenue to warrant a full
audit, and I’ve already begun preparing myself for all the
administrative work that will entail.

Project Growth and Funding

Those increases in revenue are related to growth in many of
Conservancy’s projects. 2010 marked the beginning of the first
full-time funding of a developer by Conservancy. Specifically, since
June, Matt
Mackall has been funded through directed donations to Conservancy to
work full-time on Mercurial
.
Matt blogs once a month (under the topic Mercurial Fellowship Update)
about his work,
but, more directly,
the hundreds
of changesets that Matt’s committed really show
the advantages of
funding projects through Conservancy.

Conservancy is also collecting donations and managing funding for
various part-time development initiatives by many developers.
Developers of jQuery, Sugar Labs, and Twisted have all recently received
regular development funding through Conservancy. An important part of
my job is making sure these developers receive funding and report the
work clearly and fully to the community of donors (and the general
public) that fund this work.

But, as usual with Conservancy, it’s the handling of the “many little
things” for projects that makes a big difference and sometimes
takes the most time. In late 2010, Conservancy handled funding for code
sprints and conferences for
the Mercurial, Darcs,
and jQuery projects. In addition, jQuery
held a conference in
Boston in October
, for which Conservancy handled all the financial
details. I was fortunate to be able to attend the conference and meet
many of the jQuery developers in person for the first time. Wine also
held their annual conference in November 2010, and Conservancy handled
the venue details and reimbursements for many of the travelers to the
conference.

Also, as always, Conservancy project contributors regularly attend
other conferences related to their projects. At least a few times a
month, Conservancy reimburses developers for travel to speak and attend
important conferences related to their projects.

Google Summer of Code

Since its inception, Google’s Summer of Code (SoC) program has been one
of the most important philanthropy programs for Open Source and Free
Software projects. In 2010, eight Conservancy projects (and 5% of the
entire SoC program) participated in SoC. The SoC program funds college
students for the summer to contribute to the projects, and an
experienced project contributor mentors each student. A $500 stipend
is paid to the project’s non-profit organization for each
contributor who mentors a student.

Furthermore, there’s an annual conference, in October, of all the
mentors, with travel funded by Google. This is a really valuable
conference, since it’s one of the few places where very disparate Free
Software projects that usually wouldn’t interact can meet up in one
place. I attended this year’s SoC Mentor Summit and hope to attend
again next year.

I’m really going to be urging all Conservancy’s projects to take
advantage of the SoC program in 2011. The level of funding given out by
Google for this program is higher than any other open-application
funding program for
FLOSS.
While Google’s self-interested motives are clear (the program presumably helps
them recruit young programmers to hire), the benefit of the program to
the Free Software community nevertheless cannot be ignored.

GPL Enforcement

GPL Enforcement,
primarily for our BusyBox member
project, remains an active focus of Conservancy. Work regarding the
lawsuit continues. It’s been more than a year since Conservancy filed a
lawsuit against fourteen defendants who manufacture embedded devices
that included BusyBox without source code or an offer for source. Some of
those have come into compliance with the GPL and settled, but a number
remain and are out of compliance; our litigation efforts continue.
Usually, our lawyers encourage us not to comment on ongoing litigation,
but we did put up
a news
item in August when the Court granted Conservancy a default judgment
against one of the defendants, Westinghouse
.

Meanwhile, in the coming year, Conservancy hopes to expand efforts to
enforce the GPL. New BusyBox violation reports needing attention arrive
almost daily.

More Frequent Blogging

As noted at the start of this post, my hope is to update Conservancy’s
blog more regularly with information about our activities.

This blog post was covered on
LWN
and
on lxnews.org.

SPAM Donation

Post Syndicated from RealEnder original http://alex.stanev.org/blog/?p=267

The week before last I came across an article in Kapital about how the SPAM register of the Consumer Protection Commission (КЗП) had received a donation. I was definitely glad that they had done something to simplify lookups in the register and thus make it more effective. When I took a closer look at the gift horse’s mouth, however, it became clear to me that this is yet another step sideways. The technological solution our colleagues have offered is a .NET desktop application which downloads an encrypted file containing the contents of the register. A check is performed by the user preparing a file with the recipients’ email addresses, which the application then filters against the contents of the register.
From this it should be immediately obvious that the decryption key is probably hard-coded into the application itself. After applying some not-quite-voodoo techniques (strings and friends), which have nothing to do with real reverse engineering, it turns out the register is encrypted with DES-CBC, and 20 minutes later decryption comes down to a single OpenSSL command:
$ openssl des-cbc -d -K [find_it_youself] -iv 00f00ab00bc00cf0 -in 29112010.TXT -out spam.txt
Of course, I emailed the Consumer Protection Commission about the problem, also describing the possibility of a malicious party decrypting the register and posting it on Russian and Chinese spam forums. The outcome of that is clear, and such an action would effectively undermine the already fragile foundations of the SPAM register.
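The “strings and friends” point generalizes: any key compiled into a binary is just bytes in a file, and `strings` will surface it without any real reverse engineering. A hypothetical illustration (the file and key below are made up, not the actual application):

```shell
# Any secret compiled into a binary is just bytes on disk.
# Simulate a binary with an embedded key, then recover it with strings.
# (File contents and key are made up for illustration.)
printf 'ELF\x00\x01\x02hardcoded-des-key\x00\x7f\x03' > app.bin
strings app.bin | grep 'hardcoded-des-key'
```

`strings` prints every printable run of four or more characters, so an embedded passphrase falls out of a one-liner like this.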
More than a week later, there has been no reply.

Handy binaries for Thecus NAS boxes

Post Syndicated from Laurie Denness original https://laur.ie/blog/2010/11/handy-binaries-for-thecus-nas-boxes/

I recently took delivery of the rather splendid Thecus N5500, which I love; it’s the perfect mix between “it just works” and “oh, let’s stick SSH on there and poke around”. With 5 hot-swap disk shelves and 2TB hard drives, you’ve got a serious amount of storage.

For your money you get a very nice little piece of hardware in a pretty nice shell (it strikes me as a touch tacky in places but then again it’s hardly going on show) with software that gets the job done. NFS, AFP, Samba, iSCSI, iTunes DAAP support, and plenty of modules to tickle your fancy (Logitech Squeezecenter, for instance).

But who am I kidding, I’m a sysadmin. 10 minutes after powering the thing on I was dying to log in over SSH so I could watch /proc/mdstat and see the RAID build. Luckily, the modules from the Thecus N5200 work fine, which means you’re a couple of clicks away from a root terminal.
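For the curious, watching the rebuild looks something like this (guarded so it degrades quietly on a machine without the md driver):

```shell
# Print the software-RAID status once; on the NAS, wrap it in
# `watch -n 2` to follow a rebuild live. Falls back gracefully
# where there is no md driver.
cat /proc/mdstat 2>/dev/null || echo "no md devices on this system"
```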

  1. Grab the SSH and SYSUSER N5200 modules, and unzip them (forgetting to unzip them first was a mistake I made. How embarrassing.)
  2. Upload them using the web interface, and enable them.
  3. SSH to the NAS box using the user “sys” and the password “sys”
  4. Enjoy your shell, and remember to run `passwd sys` to change the password to something else.

Now, you’ve got yourself a pretty handy, albeit BusyBox-ridden, install of Linux. The whole point of this post is so I can pimp a few statically compiled binaries that might come in useful to you; they did to me anyway.

(You may wish to install the UTILITIES module, which gives you a proper version of top and ps, amongst other things, available here)

You can simply untar and drop the binaries into the /raid/data/modules/bin folder so that they’re in your path, and stored on your disks rather than the flash units, which are rather limited in space. By the way, these modules should work fine on the Thecus N5200 NAS boxes too.
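As a concrete sketch of that step, using scratch paths standing in for the Thecus layout (on the NAS the destination would be /raid/data/modules/bin, and the tarball would be a real download rather than the stand-in built here):

```shell
# Demonstrate the "untar into the module bin dir" step with scratch
# paths standing in for the Thecus layout, and a stand-in tarball in
# place of a real download.
WORK=$(mktemp -d) && cd "$WORK"
DEST="$WORK/modules/bin"              # on the NAS: /raid/data/modules/bin
mkdir -p "$DEST"

# Build a fake "statically compiled binary" tarball for the demo:
printf '#!/bin/sh\necho hello-from-static-binary\n' > rsync
chmod +x rsync
tar -czf rsync-static.tar.gz rsync && rm rsync

tar -xzf rsync-static.tar.gz -C "$DEST"   # the actual step from the post
PATH="$DEST:$PATH"
rsync    # now resolves to the freshly dropped-in binary
```

Putting the module bin directory first on the PATH is what makes the dropped-in binaries win over the BusyBox applets.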

The binaries are available here: http://denness.net/thecus/binaries/

The list includes (all the latest versions as of the date of this blog post):

  • ethtool, handy for network interface prodding
  • iftop, a very useful “GUI” app that shows incoming/outgoing network bandwidth (let’s face it, this is fun on a NAS. NOTE: you may need to launch it as `TERM=vt100 iftop`)
  • iostat, for hard core disk stats porn. Run it with `iostat -mx 1` and watch the megabytes fly
  • rsync, handy if you want to synchronise or back up data from one place to another, which makes it particularly useful on a NAS.
  • vim, just in case you were planning on writing a lot of code on the Thecus 🙂
  • GNU screen, a nice place to store your terminals and detach and come back later. (NOTE: you may need to launch it as `TERM=vt100 screen`)
  • The command line version of PHP, in case you were planning on writing any scripts in PHP to run on the Thecus.

Any suggestions/comments, let me know.

MonitorControls – Utilities for monitor management on Windows

Post Syndicated from Laurie Denness original https://laur.ie/blog/2010/11/monitorcontrols-utilties-for-monitor-management-on-windows/

When I ended up using Windows to power the overhead information screens at Last.fm, I lost the ability to have a one-line crontab entry that put the monitors into DPMS standby (and woke them up) as we moved in and out of office hours. It makes no sense to waste power, and, more importantly, leaving the screens on when the office is empty shortens their life.

I didn’t think I would have any issue finding a utility to place the screens into standby mode. I didn’t; but unfortunately the ones I found were either not free, massively complicated, or simply didn’t work.

So I found a code snippet online, fired up a copy of Visual Studio and compiled two exe files: MonitorOn.exe and MonitorOff.exe. MonitorOff sends a signal to all attached monitors on the system to go into sleep mode; moving the mouse wakes them up as normal, or you can run MonitorOn to send the wake signal manually. Simply schedule these with the Windows Task Scheduler and you have a simple, effective way to manage your information screens.
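The kind of snippet the post alludes to is well known. A minimal sketch, assuming C# and the standard Win32 broadcast approach (this is not the actual MonitorOff source): send WM_SYSCOMMAND with SC_MONITORPOWER to every top-level window.

```csharp
// Minimal sketch of the kind of snippet described above (not the actual
// MonitorOff source): broadcast WM_SYSCOMMAND/SC_MONITORPOWER so every
// attached monitor powers down.
using System;
using System.Runtime.InteropServices;

class MonitorOff
{
    const int WM_SYSCOMMAND   = 0x0112;
    const int SC_MONITORPOWER = 0xF170;
    static readonly IntPtr HWND_BROADCAST = new IntPtr(0xFFFF);

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, int msg,
                                     IntPtr wParam, IntPtr lParam);

    static void Main()
    {
        // lParam: 2 = power off, 1 = low power, -1 = on
        SendMessage(HWND_BROADCAST, WM_SYSCOMMAND,
                    (IntPtr)SC_MONITORPOWER, (IntPtr)2);
    }
}
```

A MonitorOn counterpart would pass -1 as the last argument instead of 2.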

You can download MonitorOn and MonitorOff here.
