Branding GNU Mailman Headers & Footers

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2014/11/08/mailman.html

As always, when something takes me a while to figure out, I try to post
the generally useful technical information on my blog. For the
new copyleft.org site, I’ve been trying
to get all the pages branded properly with the header/footer. This was
straightforward for ikiwiki (which hosts the main site), but I spent an
hour searching around this morning for how to brand the GNU Mailman
instance
on lists.copyleft.org.

Ultimately, here’s what I had to do to get
everything branded, and I’m still not completely sure I found every spot.
It seems that anyone who wanted to make a useful patch to GNU Mailman
could offer up a change that unifies the HTML templating and branding. In
the meantime, at least for GNU Mailman 2.1.15 as found in Debian 7
(wheezy), here’s what you have to do:

First, some of the branding details are handled in the Python code itself,
so my first action was:

            # cd /var/lib/mailman/Mailman
            # cp -pa htmlformat.py /etc/mailman
            # ln -sf /etc/mailman/htmlformat.py htmlformat.py
          

I did this because htmlformat.py is not a file that the Debian
package installation of Mailman puts in /etc/mailman, and I wanted
etckeeper to track that I was
modifying that file.

The primary modifications I made to that file were in the
MailmanLogo() method, to which I added a custom footer, and in
the Document.Format() method, to which I added a custom
header (at least when not self.suppress_head).
The suppress_head option was a red flag telling me it was
likely not enough merely to change these methods to get a custom header
and footer on every page. I was right. Ultimately, I also had to change
nearly all the HTML files in /etc/mailman/en/, each of which
needed different changes based on what file it was, and there was no
clear guideline. I suppose I could have
added <MM-Mailman-Footer> to every file that had
a </BODY> but lacked the footer tag, which would have put my footer
everywhere, but in the end, I custom-hacked the whole thing.
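For the record, that footer-tag shortcut could be sketched roughly like this (a hypothetical helper I did not actually use; the function name and the GNU sed invocation are my own):

```shell
# Hypothetical sketch (not what I actually did): append <MM-Mailman-Footer>
# just before </BODY> in every template that closes BODY but lacks the tag.
add_mailman_footer() {
  dir="$1"    # e.g. /etc/mailman/en on Debian wheezy
  for f in "$dir"/*.html; do
    [ -f "$f" ] || continue
    if grep -qi '</BODY>' "$f" && ! grep -qi 'MM-Mailman-Footer' "$f"; then
      # GNU sed: insert the footer marker on its own line before </BODY>
      sed -i 's|</[Bb][Oo][Dd][Yy]>|<MM-Mailman-Footer>\n&|' "$f"
    fi
  done
}
```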

The full set of patches I applied to the Mailman files is available on
copyleft.org, in case you want to see how I did it.

Pulling JPEGs out of thin air

Post Syndicated from Unknown original https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html

This is an interesting demonstration of the capabilities of afl; I was actually pretty surprised that it worked!

$ mkdir in_dir
$ echo 'hello' >in_dir/hello
$ ./afl-fuzz -i in_dir -o out_dir ./jpeg-9a/djpeg

In essence, I created a text file containing just “hello” and asked the fuzzer to keep feeding it to a program that expects a JPEG image (djpeg is a simple utility bundled with the ubiquitous IJG jpeg image library; libjpeg-turbo should also work). Of course, my input file does not resemble a valid picture, so it gets immediately rejected by the utility:

$ ./djpeg '../out_dir/queue/id:000000,orig:hello'
Not a JPEG file: starts with 0x68 0x65

Such a fuzzing run would normally be completely pointless: there is essentially no chance that a “hello” could ever be turned into a valid JPEG by a traditional, format-agnostic fuzzer, since the probability that dozens of random tweaks would align just right is astronomically low.

Luckily, afl-fuzz can leverage lightweight assembly-level instrumentation to its advantage – and within a millisecond or so, it notices that although setting the first byte to 0xff does not change the externally observable output, it triggers a slightly different internal code path in the tested app. Equipped with this information, it decides to use that test case as a seed for future fuzzing rounds:

$ ./djpeg '../out_dir/queue/id:000001,src:000000,op:int8,pos:0,val:-1,+cov'
Not a JPEG file: starts with 0xff 0x65

When later working with that second-generation test case, the fuzzer almost immediately notices that setting the second byte to 0xd8 does something even more interesting:

$ ./djpeg '../out_dir/queue/id:000004,src:000001,op:havoc,rep:16,+cov'
Premature end of JPEG file
JPEG datastream contains no image
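The magic-value gate the fuzzer just cleared boils down to a two-byte check, roughly like this (a conceptual sketch, not djpeg's actual code):

```shell
# Conceptual sketch of the gate djpeg applies up front: a JPEG stream must
# begin with the SOI marker, bytes 0xff 0xd8 (this is not djpeg's real code).
is_jpeg() {
  sig=$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')
  [ "$sig" = "ffd8" ]
}
```

The original “hello” seed fails this check (it starts with 0x68 0x65), while the test case above passes it and reaches the deeper parsing stages.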

At this point, the fuzzer managed to synthesize the valid file header – and actually realized its significance. Using this output as the seed for the next round of fuzzing, it quickly starts getting deeper and deeper into the woods. Within several hundred generations and several hundred million execve() calls, it figures out more and more of the essential control structures that make a valid JPEG file – SOFs, Huffman tables, quantization tables, SOS markers, and so on:

$ ./djpeg '../out_dir/queue/id:000008,src:000004,op:havoc,rep:2,+cov'
Invalid JPEG file structure: two SOI markers
...
$ ./djpeg '../out_dir/queue/id:001005,src:000262+000979,op:splice,rep:2'
Quantization table 0x0e was not defined
...
$ ./djpeg '../out_dir/queue/id:001282,src:001005+001270,op:splice,rep:2,+cov' >.tmp; ls -l .tmp
-rw-r--r-- 1 lcamtuf lcamtuf 7069 Nov  7 09:29 .tmp

The first image, hit after about six hours on an 8-core system, looks very unassuming: it’s a blank grayscale image, 3 pixels wide and 784 pixels tall. But the moment it is discovered, the fuzzer starts using the image as a seed – rapidly producing a wide array of more interesting pics for every new execution path.

Of course, synthesizing a complete image out of thin air is an extreme example, and not necessarily a very practical one. But more prosaically, fuzzers are meant to stress-test every feature of the targeted program. With instrumented, generational fuzzing, lesser-known features (e.g., progressive, black-and-white, or arithmetic-coded JPEGs) can be discovered and locked onto without requiring a giant, high-quality corpus of diverse test cases to seed the fuzzer with.

The cool part of the libjpeg demo is that it works without any special preparation: there is nothing special about the “hello” string, the fuzzer knows nothing about image parsing, and is not designed or fine-tuned to work with this particular library. There aren’t even any command-line knobs to turn. You can throw afl-fuzz at many other types of parsers with similar results: with bash, it will write valid scripts; with giflib, it will make GIFs; with fileutils, it will create and flag ELF files, Atari 68xxx executables, x86 boot sectors, and UTF-8 with BOM. In almost all cases, the performance impact of instrumentation is minimal, too.

Of course, not all is roses; at its core, afl-fuzz is still a brute-force tool. This makes it simple, fast, and robust, but also means that certain types of atomically executed checks with a large search space may pose an insurmountable obstacle to the fuzzer; a good example of this may be:

if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;

In practical terms, this means that afl-fuzz won’t have as much luck “inventing” PNG files or non-trivial HTML documents from scratch – and will need a starting point better than just “hello”. To consistently deal with code constructs similar to the one shown above, a general-purpose fuzzer would need to understand the operation of the targeted binary on a wholly different level. There is some progress on this front in academia, but frameworks that can pull this off across diverse and complex codebases in a quick, easy, and reliable way are probably still years away.
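In practice, the workaround for such checks is simply a better starting point. For instance (a hypothetical setup; the target binary name is made up), seeding afl-fuzz with a file that already carries the fixed PNG signature removes the hardest hurdle:

```shell
# Hypothetical sketch: seed the fuzzer with the fixed 8-byte PNG signature
# so the magic-value check is already satisfied from generation zero.
mkdir -p in_png
printf '\211PNG\r\n\032\n' > in_png/seed.png
# ./afl-fuzz -i in_png -o out_png ./pngtest    # target binary is an assumption
```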

PS. Several folks asked me about symbolic execution and other inspirations for afl-fuzz; I put together some notes in this doc.

For Russia, Without Love

Post Syndicated from Longanlon original http://kaka-cuuka.com/3428

Russophilia and Russophobia are traditional hobbies in our country, and not a day goes by without my running into a heated online argument on the topic. Russia really does bear the imprint of its difficult past as it tries to get by in the modern world, and it is a bit silly to label it the source of all evil, which is why some time ago I wrote about it with love. Given the sentiments of many Bulgarians, however, I think it is time for a few words about Russia with no love whatsoever…

(Read more…) (843 words)

MariaDB foundation trademark agreement

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2014/10/mariadb-foundation-trademark-agreement.html


We have now published the trademark agreement between the MariaDB Corporation (formerly SkySQL) and the MariaDB Foundation. This agreement guarantees that MariaDB Foundation has the rights needed to protect the MariaDB server project!

By protection, I mean that the MariaDB Foundation in turn ensures that anyone can take part in MariaDB development on equal terms (as with any other open source project).

I have received some emails and read some blog posts from people who are confusing trademarks with the rights and possibilities for community developers to be part of an open source project.

The MariaDB Foundation was never created to protect the MariaDB trademark. It was created to ensure that what happened to MySQL would never happen to MariaDB: with MySQL, people from the community could not be part of driving and developing the project on equal terms with other companies.

I have personally never seen a conflict with having one company own the trademark of an open source product, as long as anyone can participate in the development of the product! Having a strong driver for an open source project usually ensures that there are more full-time developers working on a project than would otherwise be possible. This makes the product better and makes it useful for more people. In most cases, people are participating in an open source project because they are using it, not because they directly make money on the project.

This is certainly the case with MySQL and MariaDB, but also with other projects. If the MySQL or MariaDB trademark had been fully owned by a foundation from the start, I don’t think either project would have been as successful as it is! More about this later.

Some examples of open source projects that have the trademark used or owned by a commercial parent company are WordPress (wordpress.com and WordPress.org) and Mozilla.

Even when it comes to projects like Linux that are developed by many companies, the trademark is not owned by the Linux Foundation.

There has been some concern that MariaDB Corporation has more developers and Maria captains (people with write access to the MariaDB repositories) on the MariaDB project than anyone else. This means that the MariaDB Corporation has more say about the MariaDB roadmap than anyone else.

This is right and actually how things should be; the biggest contributors to a project are usually the ones that drive the project forward.

This doesn’t, however, mean that no one else can join the development of the MariaDB project and be part of driving the road map.

The MariaDB Foundation was created exactly to guarantee this.

It’s the MariaDB Foundation that governs the rules of how the project is developed, under what criteria one can become a Maria captain, the rights of the Maria captains, and how conflicts in the project are resolved.

Those rules are not yet fully defined, as we have had very few conflicts when it comes to accepting patches. The work on these rules has been initiated, and I hope that we’ll have nice and equal rules in place soon. In all cases the rules will be what you would expect from an open source project. Any company that wants to ensure that MariaDB continues to be a free project, and wants to be part of defining the rules of the project, can join the MariaDB Foundation and be part of this process!

Some of the things that I think went wrong with MySQL and would not have happened if we had created a foundation similar to the MariaDB Foundation for MySQL early on:

  • Claims that companies like Google and eBay couldn’t get their patches into MySQL if they didn’t pay (this was before MySQL was bought by Sun).
  • Closed source components in MySQL, developed by the company that owns the trademark to MySQL (almost happened to MySQL in Sun and has happened in MySQL Enterprise from Oracle).
  • Not giving community access to the roadmap.
  • Not giving community developers write access to the official repositories of MySQL.
  • Hiding code and critical test cases from the community.
  • No guarantee that a patch will ever be reviewed.

The MariaDB Foundation guarantees that the above things will never happen to MariaDB. In addition, the MariaDB Foundation employs people to perform reviews, provide documentation, and work actively to incorporate external contributions into the MariaDB project.

This doesn’t mean that anyone can push anything into MariaDB. Any changes need to follow project guidelines and be reviewed and approved by at least one Maria captain. Also, no Maria captain can object to the inclusion of a given patch except on technical merits. If things can’t be resolved among the captains and/or the user community, the MariaDB Foundation has the final word.

I claimed earlier that MariaDB would never have been successful if the trademark had been fully owned by a foundation. The reason I can claim this is that we tried to do it this way and it failed! Had we continued on that route, MariaDB would probably be a dead project today!

To be able to understand this, you will need a little background in MariaDB history. The main points are:

  • Some parts of the MariaDB team and I left Sun in February 2009 to work on the Maria storage engine (now renamed to Aria).
  • Oracle started to acquire Sun in April 2009.
  • Monty Program Ab then hired the rest of the MariaDB engineers and started to focus on MariaDB.
  • I was part of founding SkySQL in July 2010, as a home for MySQL support, consultants, trainers, and sales people.
  • The MariaDB Foundation was announced in November 2012.
  • Monty Program Ab and SkySQL Ab joined forces in April 2013.
  • SkySQL Ab renamed itself to MariaDB Corporation in October 2014.

During the 4 years before the MariaDB Foundation was formed, I contacted most of the big companies that were using MySQL to thank them for their support and to ask them to be part of MariaDB development. The answers were almost all the same:

“We are very interested in you succeeding, but we can’t help you with money or resources until we are using MariaDB ourselves. This is only going to happen when you have proved that MariaDB will take over MySQL.”

It didn’t help that most of the companies that used to pay for MySQL support had gotten scared of MySQL being sold to Oracle and had purchased 2-4 year support contracts to protect themselves against sudden price increases in MySQL support.

In May 2012, after 4 years and close to 4 million euros of my own money spent to make MariaDB possible, I realized that something had to change.

I contacted some of the big technology companies in Silicon Valley and asked if they would be interested in helping create a MariaDB Foundation in which they could play bigger roles. The idea was that all the MariaDB developers from Monty Program Ab, the MariaDB trademark, and other resources would move to the foundation. For this to happen, I needed guarantees that the foundation would have the resources to pay the salaries of the MariaDB developers for at least the next 5 years.

In the end two companies showed interest in doing this, but after months of discussions they both said that “now was not yet the right time to do this”.

In the end I created the MariaDB Foundation with a smaller role, just to protect the MariaDB server, and got some great companies to support our work:

  • Booking.com
  • SkySQL (2 years!)
  • Parallels (2 years!)
  • Automattic
  • Zenimax

There were also smaller donations from a variety of companies.

See the whole list at https://mariadb.org/en/supporters.

During this time, SkySQL had become the biggest supporter of MariaDB and also the biggest customer of Monty Program Ab. SkySQL provided front line support for MySQL and MariaDB and Monty Program Ab did the “level 3” support (bug fixes and enhancements for MariaDB).

In the end there were only two ways to go forward to secure the financing of the MariaDB project:

a) Get investors for Monty Program Ab
b) Sell Monty Program Ab.

Note that neither of the above options would have been possible if Monty Program Ab had not owned the MariaDB trademark!

Selling to SkySQL was in the end the right and logical thing to do:

  • They have good investors who are committed to SkySQL and MariaDB.
  • Most of the people in the two companies already know each other as most come from the old MySQL team.
  • The MariaDB trademark was much better known than SkySQL, and owning it would make it much easier for SkySQL to expand its business.
  • As SkySQL was the biggest supporter of the MariaDB project, this felt like the right thing to do.

However, to ensure the future of the MariaDB project, SkySQL and Monty Program Ab both agreed that the MariaDB Foundation was critically needed and that a formal trademark agreement had to be put in place. Until then, there had been only a verbal promise of the MariaDB trademarks to the Foundation, and we had to get this legally right.

For a number of reasons too boring to bring up here, this took much longer than expected. You can find the trademark agreement publicly available here.

However, now this is finally done and I am happy to say that the future of MariaDB, as an open source project, is protected and there will never again be a reason for me to fork it!

So feel free to join the MariaDB project, either as a developer or community contributor or as a member of the MariaDB Foundation!

PSA: don’t run ‘strings’ on untrusted files (CVE-2014-8485)

Post Syndicated from Unknown original https://lcamtuf.blogspot.com/2014/10/psa-dont-run-strings-on-untrusted-files.html

Many shell users, and certainly most of the people working in computer forensics or other fields of information security, have a habit of running /usr/bin/strings on binary files originating from the Internet. Their understanding is that the tool simply scans the file for runs of printable characters and dumps them to stdout – something that is very unlikely to put you at any risk.
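That mental model, a format-agnostic scan for runs of printable characters, is easy to sketch in pure shell (an illustration of the expectation, not of what GNU strings actually does):

```shell
# What users assume strings does: replace every non-printable byte with a
# newline, then keep only runs of at least 4 printable characters.
naive_strings() {
  tr -c '[:print:]' '\n' < "$1" | grep -E '.{4,}'
}
```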

It is much less known that the Linux version of strings is an integral part of GNU binutils, a suite of tools that specializes in the manipulation of several dozen executable formats using a bundled library called libbfd. Other well-known utilities in that suite include objdump and readelf.

Perhaps simply by virtue of being a part of that bundle, the strings utility tries to leverage the common libbfd infrastructure to detect supported executable formats and “optimize” the process by extracting text only from specific sections of the file. Unfortunately, the underlying library can hardly be described as safe: a quick pass with afl (and probably with any other competent fuzzer) quickly reveals a range of troubling and likely exploitable out-of-bounds crashes due to very limited range checking, say:

$ wget http://lcamtuf.coredump.cx/strings-bfd-badptr2
...
$ strings strings-bfd-badptr2
Segmentation fault
...
strings[24479]: segfault at 4141416d ip 0807a4e7 sp bf80ca60 error 4 in strings[8048000+9a000]
...
      while (--n_elt != 0)
        if ((++idx)->shdr->bfd_section)                                ← Read from an attacker-controlled pointer
          elf_sec_group (idx->shdr->bfd_section) = shdr->bfd_section;  ← Write to an attacker-controlled pointer
...
(gdb) p idx->shdr
$1 = (Elf_Internal_Shdr *) 0x41414141

The 0x41414141 pointer being read and written by the code comes directly from that proof-of-concept file and can be freely modified by the attacker to try overwriting program control structures. Many Linux distributions ship strings without ASLR, making potential attacks easier and more reliable – a situation reminiscent of one of the recent bugs in bash.

Interestingly, the problems with the utility aren’t exactly new; Tavis spotted the first signs of trouble some nine years ago.

In any case: the bottom line is that if you are used to running strings on random files, or depend on any libbfd-based tools for forensic purposes, you should probably change your habits. For strings specifically, invoking it with the -a parameter seems to inhibit the use of libbfd. Distro vendors may want to consider making the -a mode default, too.



PS. I actually had the libbfd fuzzing job running on this thing!

My perl-cwmp patches are merged

Post Syndicated from Anonymous original http://deliantech.blogspot.com/2014/10/my-perl-cwmp-patches-are-merged.html

Hello,
I’ve used perl-cwmp here and there. It is a nice, really small, really light and simple TR-069 ACS, with a very easy install and no heavy requirements. You can read the whole code in a few minutes, and you can make your own modifications. I am using it in a lot of small “special” cases, where you need something fast and specific, or a very complex workflow that cannot be implemented by any other ACS server.
However, this project had been stalled for a while. I’ve found that a lot of modern TR-069/CWMP agents do not work well with perl-cwmp.
There are quite a few reasons behind those problems:
– Some of the agents are very strict: they expect the SOAP message to be formatted in a specific way, not the way perl-cwmp formats it
– Some of the agents are compiled with a not-so-smart, static expansion of the CWMP XSD file. That means they expect a strict type specification in the SOAP message and strict ordering
perl-cwmp does not “compile” the CWMP XSD, does not send strictly formatted requests, and does not interpret the responses strictly. It does not automatically set the correct property type in the request according to the spec, because it never reads the spec; it always assumes that the property type is a string.
To allow perl-cwmp to work with these types of TR-069 agents, I’ve made a few modifications to the code, and I am happy to announce that they have been accepted and merged into the main code:
The first modification is that I’ve updated the SOAP header according to the current standard. It was incorrectly set, and many TR-069 devices I have tested (basically all of those built on the Broadcom TR-069 client) rejected the request.
The second modification is that all properties may now have a specified type. Unless you specify a type, it is assumed to be a string. This allows the ACS to set property values on agents that do strict type checking.
InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#60
The #…# marker specifies the type of the property. In the example above, we are setting PeriodicInformInterval to the unsignedInt value 60.
You can also set the value of a property by reading the value of another property.
For that, you use ${ property name }.
Here is an example of how to set the PPP password to the value of the serial number:
InternetGatewayDevice.WANDevice.1.WANConnectionDevice.1.WANPPPConnection.1.Password: ${InternetGatewayDevice.DeviceInfo.SerialNumber}
And last but not least: you can now execute a small piece of code, or an external script, and set the value of a property to the output of that code. You do that with $[ code ]
Here is an example of how to set a random value for the PeriodicInformInterval:

InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#$[60 + int(rand(100))]

Here is another example, showing how to execute an external script that could make this decision:
InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#$[ `./externalscript.sh ${InternetGatewayDevice.LANDevice.1.LANEthernetInterfaceConfig.1.MACAddress} ${InternetGatewayDevice.DeviceInfo.SerialNumber}` ]
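As a concrete illustration, the externalscript.sh from the example above could look something like this (a hypothetical sketch; the function name, the hashing scheme, and the 60-159 second range are my own choices):

```shell
# Hypothetical sketch of externalscript.sh: derive a stable, per-device
# inform interval from the serial number so devices don't all inform at once.
pick_interval() {
  serial="$1"
  # cksum gives a deterministic 32-bit hash of the serial number
  hash=$(printf '%s' "$serial" | cksum | cut -d' ' -f1)
  echo $((60 + hash % 100))
}
```

perl-cwmp would then substitute the script’s printed number into the property value, as in the configuration line above.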
The last modification I made allows perl-cwmp to “fork” a new process when a TR-069 request arrives. The code used to be single-threaded, which meant agents had to wait until the previous task completed. If the TCP listen queue is full, or the ACS is very busy, some agents will assume there is no response and time out. You may then have to wait 24 hours (the default periodic inform interval for some vendors) until you get the next request. Now that can be avoided.
All this is very valuable for dynamic and automated configurations: no modification of the core code is needed, only of the configuration file.

Two more browser memory disclosure bugs (CVE-2014-1580 and #19611cz)

Post Syndicated from Unknown original https://lcamtuf.blogspot.com/2014/10/two-more-browser-memory-disclosure-bugs.html

To add several more trophies to afl’s pile of image parsing memory disclosure vulnerabilities:

  • MSFA 2014-78 (CVE-2014-1580) fixes another case of uninitialized memory disclosure in Firefox – this time, when rendering truncated GIF images on <canvas>. The bug was reported on September 5 and fixed today. For a convenient test case, check out this page. Rough timeline:

    • September 5: Initial, admittedly brief notification to vendor, including a simple PoC.
    • September 5: Michael Wu confirms the exposure and pinpoints the root cause. Discussion of fixes ensues.
    • September 9: Initial patch created.
    • September 12: Patch approved and landed.
    • October 2: Patch verified by QA.
    • October 13: Fixes ship with Firefox 33.

  • MSRC case #19611cz (MS14-085) is a conceptually similar bug related to JPEG DHT parsing, seemingly leaking bits of stack information in Internet Explorer. This was reported to MSRC on July 2 and hasn’t been fixed to date. Test case here. Rough timeline:

    • July 2: Initial, admittedly brief notification to vendor, mentioning the disclosure of uninitialized memory and including a simple PoC.
    • July 3: MSRC request to provide “steps and necessary files to reproduce”.
    • July 3: My response, pointing back to the original test case.
    • July 3: MSRC response, stating that they are “unable to determine the nature of what I am reporting”.
    • July 3: My response, reiterating the suspected exposure in a more verbose way.
    • July 4: MSRC response from an analyst, confirming that they could reproduce, but also wondering if “his webserver is not loading up a different jpeg just to troll us”.
    • July 4: My response stating that I’m not trolling MSRC.
    • July 4: MSRC opens case #19611cz.
    • July 29: MSRC response stating that they are “unable identify a way in which an attacker would be able to propagate the leaked stack data back to themselves”.
    • July 29: My response pointing out the existence of the canvas.toDataURL() API in Internet Explorer, and providing a new PoC that demonstrates the ability to read back data.
    • September 24: A notification from MSRC stating that the case has been transferred to a new case manager.
    • October 7: My response noting that we’ve crossed the 90-day mark with no apparent progress made, and that I plan to disclose the bug within a week.
    • October 9: Acknowledgment from MSRC.

Well, that’s it. Enjoy!

Fuzzing random programs without execve()

Post Syndicated from Unknown original https://lcamtuf.blogspot.com/2014/10/fuzzing-binaries-without-execve.html

The most common way to fuzz data parsing libraries is to find a simple binary that exercises the interesting functionality, and then simply keep executing it over and over again – of course, with slightly different, randomly mutated inputs in each run. In such a setup, testing for evident memory corruption bugs in the library can be as simple as doing waitpid() on the child process and checking if it ever dies with SIGSEGV, SIGABRT, or something equivalent.

This approach is favored by security researchers for two reasons. Firstly, it eliminates the need to dig into the documentation, understand the API offered by the underlying library, and then write custom code to stress-test the parser in a more direct way. Secondly, it makes the fuzzing process repeatable and robust: the program is running in a separate process and is restarted with every input file, so you do not have to worry about a random memory corruption bug in the library clobbering the state of the fuzzer itself, or having weird side effects on subsequent runs of the tested tool.
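The whole setup described above fits in a few lines. Here is an illustrative sketch (not afl’s actual code; the target path and input file are placeholders), using the wait-status check the text describes:

```python
import signal
import subprocess

def run_once(target, input_path):
    """Execute one fuzzing iteration: run the target binary on the
    (already mutated) input file and report any crash signal.
    'target' and 'input_path' are placeholder names."""
    proc = subprocess.run([target, input_path])
    # On POSIX, subprocess encodes "child killed by signal N" as
    # returncode -N, which is what the waitpid()-based check boils down to.
    if proc.returncode < 0 and -proc.returncode in (signal.SIGSEGV, signal.SIGABRT):
        return -proc.returncode
    return 0
```

A real fuzzer would mutate the input file before every call and save any input that makes run_once() report a signal.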

Unfortunately, there is also a problem: especially for simple libraries, you may end up spending most of the time waiting for execve(), the linker, and all the library initialization routines to do their job. I’ve been thinking of ways to minimize this overhead in american fuzzy lop, but most of the ideas I had were annoyingly complicated. For example, it is possible to write a custom ELF loader and execute the program in-process while using mprotect() to temporarily lock down the memory used by the fuzzer itself – but things such as signal handling would be a mess. Another option would be to execute in a single child process, make a snapshot of the child’s process memory and then “rewind” to that image later on via /proc/pid/mem – but likewise, dealing with signals or file descriptors would require a ton of fragile hacks.

Luckily, Jann Horn figured out a different, much simpler approach, and sent me a patch for afl out of the blue 🙂 It boils down to injecting a small piece of code into the fuzzed binary – a feat that can be achieved via LD_PRELOAD, via PTRACE_POKETEXT, via compile-time instrumentation, or simply by rewriting the ELF binary ahead of time. The purpose of the injected shim is to let execve() happen, get past the linker (ideally with LD_BIND_NOW=1, so that all the hard work is done beforehand), and then stop early on in the actual program, before it gets to processing any inputs generated by the fuzzer or doing anything else of interest. In fact, in the simplest variant, we can simply stop at main().

Once the designated point in the program is reached, our shim simply waits for commands from the fuzzer; when it receives a “go” message, it calls fork() to create an identical clone of the already-loaded program; thanks to the powers of copy-on-write, the clone is created very quickly yet enjoys a robust level of isolation from its older twin. Within the child process, the injected code returns control to the original binary, letting it process the fuzzer-supplied input data (and suffer any consequences of doing so). Within the parent, the shim relays the PID of the newly-created process to the fuzzer and goes back to the command-wait loop.
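The protocol is easier to see in a high-level sketch than in assembly. Below is an illustrative Python rendering of a single round of the loop (the real shim, reproduced further down, is x86 assembly; the function name and the fake child body here are mine, not afl’s):

```python
import os
import struct

def forkserver_round(ctrl_fd, stat_fd):
    """One round of the fork-server loop: block until the fuzzer sends a
    4-byte "go" token, fork a copy of the already-loaded target, then
    relay the child's PID and wait status back. Illustrative only."""
    if len(os.read(ctrl_fd, 4)) != 4:            # fuzzer hung up: stop
        return False
    pid = os.fork()
    if pid == 0:
        # Child: the real shim returns control to the target binary here,
        # letting it process the fuzzer's input; we just fake a clean run.
        os._exit(42)
    os.write(stat_fd, struct.pack("I", pid))     # report the clone's PID
    _, status = os.waitpid(pid, os.WUNTRACED)
    os.write(stat_fd, struct.pack("I", status))  # report its wait status
    return True
```

Wired up to a pair of pipes standing in for the fuzzer’s command and status channels, each call to the function produces one fresh, already-initialized child.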

Of course, when you start dealing with process semantics on Unix, nothing is as easy as it appears at first sight; here are some of the gotchas we had to work around in the code:

  • File descriptor offsets are shared between processes created with fork(). This means that any descriptors that are open at the time that our shim is executed may need to be rewound to their original position; not a significant concern if we are stopping at main() – we can just as well rewind stdin by doing lseek() in the fuzzer itself, since that’s where the descriptor originates – but it can become a hurdle if we ever aim at locations further down the line.

  • In the same vein, there are some types of file descriptors we can’t fix up. The shim needs to be executed before any access to pipes, character devices, sockets, and similar non-resettable I/O. Again, not a big concern for main().

  • The task of duplicating threads is more complicated and would require the shim to keep track of them all. So, in simple implementations, the shim needs to be injected before any additional threads are spawned in the binary. (Of course, threads are rare in file parser libraries, but may be more common in more heavyweight tools.)

  • The fuzzer is no longer an immediate parent of the fuzzed process, and as a grandparent, it can’t directly use waitpid(); there is also no other simple, portable API to get notified about the process’ exit status. We fix that simply by having the shim do the waiting, then send the status code to the fuzzer. In theory, we should simply call the clone() syscall with the CLONE_PARENT flag, which would make the new process “inherit” the original PPID. Unfortunately, calling the syscall directly confuses glibc, because the library caches the result of getpid() when initializing – and without a way to make it reconsider, PID-dependent calls such as abort() or raise() will go astray. There is also a library wrapper for the clone() call that does update the cached PID – but the wrapper is unwieldy and insists on messing with the process’ stack.

    (To be fair, PTRACE_ATTACH offers a way to temporarily adopt a process and be notified of its exit status, but it also changes process semantics in a couple of ways that need a fair amount of code to fully undo.)
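The first gotcha – shared descriptor offsets, and the lseek() rewind that fixes them – is easy to demonstrate directly. The sketch below is illustrative, not afl code:

```python
import os
import tempfile

def shared_offset_demo():
    """Show that a fork()ed child and its parent share the file offset,
    and that the parent can rewind it with lseek(), as described above."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"abcdef")
        os.lseek(fd, 0, os.SEEK_SET)

        pid = os.fork()
        if pid == 0:
            os.read(fd, 3)          # child consumes three bytes...
            os._exit(0)
        os.waitpid(pid, 0)

        moved = os.lseek(fd, 0, os.SEEK_CUR)  # ...and the parent's offset moved too
        os.lseek(fd, 0, os.SEEK_SET)          # the fuzzer-side rewind
        rewound = os.lseek(fd, 0, os.SEEK_CUR)
        return moved, rewound
    finally:
        os.close(fd)
        os.remove(path)
```

Because fork() duplicates the file descriptor but not the underlying open file description, the child’s reads advance the parent’s offset as well; a single lseek() back to zero restores it.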

Even with the gotchas taken into account, the shim isn’t complicated and has very few moving parts – a welcome relief compared to the solutions I had in mind earlier on. It reads commands via a pipe at file descriptor 198, uses fd 199 to send messages back to the parent, and does just the bare minimum to get things sorted out. A slightly abridged version of the code is:

__afl_forkserver:

  /* Phone home and tell the parent that we're OK. */

  pushl $4          /* length    */
  pushl $__afl_temp /* data      */
  pushl $199        /* file desc */
  call  write
  addl  $12, %esp

__afl_fork_wait_loop:

  /* Wait for parent by reading from the pipe. This will block until
     the parent sends us something. Abort if read fails. */

  pushl $4          /* length    */
  pushl $__afl_temp /* data      */
  pushl $198        /* file desc */
  call  read
  addl  $12, %esp

  cmpl  $4, %eax
  jne   __afl_die

  /* Once woken up, create a clone of our process. */

  call fork

  cmpl $0, %eax
  jl   __afl_die
  je   __afl_fork_resume

  /* In parent process: write PID to pipe, then wait for child. 
     Parent will handle timeouts and SIGKILL the child as needed. */

  movl  %eax, __afl_fork_pid

  pushl $4              /* length    */
  pushl $__afl_fork_pid /* data      */
  pushl $199            /* file desc */
  call  write
  addl  $12, %esp

  pushl $2             /* WUNTRACED */
  pushl $__afl_temp    /* status    */
  pushl __afl_fork_pid /* PID       */
  call  waitpid
  addl  $12, %esp

  cmpl  $0, %eax
  jle   __afl_die

  /* Relay wait status to pipe, then loop back. */

  pushl $4          /* length    */
  pushl $__afl_temp /* data      */
  pushl $199        /* file desc */
  call  write
  addl  $12, %esp

  jmp __afl_fork_wait_loop

__afl_fork_resume:

  /* In child process: close fds, resume execution. */

  pushl $198
  call  close

  pushl $199
  call  close

  addl  $8, %esp
  ret

But, was it worth it? The answer is a resounding “yes”: the stop-at-main() logic, already shipping with afl 0.36b, can speed up the fuzzing of many common image libraries by a factor of two or more. It’s actually almost unexpected, given that we still keep doing fork(), a syscall with a lingering reputation for being very slow.

The next challenge is devising a way to move the shim down the stream, so that we can also skip any common program initialization steps, such as reading config files – and stop just a few instructions shy of the point where the application tries to read the mutated data we are messing with. Jann’s original patch has a solution that relies on ptrace() to detect file access; but we’ve been brainstorming several other ways.

PS. On a related note, some readers might enjoy this.

Always Follow the Money

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2014/10/10/anita-borg.html

Selena Larson wrote an article describing the Male Allies Plenary Panel at the Anita Borg Institute’s Grace Hopper Celebration on Wednesday night. There is a video available of the panel (that’s the YouTube link; the links on the Anita Borg Institute’s website don’t work with Free Software).

Selena’s article pretty much covers it. The only point that I thought useful to add was that one can “follow the money” here. Interestingly enough, Facebook, Google, GoDaddy, and Intuit were all listed as top-tier sponsors of the event. I find it a strange correlation that not one man on this panel is from a company that didn’t sponsor the event. Are there no male allies to the cause of women in tech worth hearing from who work for companies that, say, don’t have enough money to sponsor the event? Perhaps that’s true, but it’s somewhat surprising.

Honest US Congresspeople often say that the main problem with corruption of campaign funds is that those who donate simply have more access and time to make their case to the congressional representatives. They aren’t buying votes; they’re buying access for conversations. (This was covered well in This American Life, Episode 461.)

I often see a similar problem in the “Open Source” world. The
loudest microphones can be bought by the highest bidder (in various ways),
so we hear more from the wealthiest companies. The amazing thing about
this story, frankly, is that buying the microphone didn’t work
this time. I’m very glad the audience refused to let it happen! I’d love
to see a similar reaction at the corporate-controlled “Open Source and
Linux” conferences!

Update later in the day: The conference I’m commenting on
above is the same conference where Satya Nadella, CEO of Microsoft, said
that women shouldn’t ask for raises, and Microsoft is also a
top-tier sponsor of the conference. I’m left wondering if anyone who spoke
at this conference didn’t pay for the privilege of making these gaffes.

