Tag Archives: fuzzing

Fuzzilli – JavaScript Engine Fuzzing Library

Post Syndicated from Darknet original https://www.darknet.org.uk/2020/10/fuzzilli-javascript-engine-fuzzing-library/

Fuzzilli – JavaScript Engine Fuzzing Library

Fuzzilli is a JavaScript engine fuzzing library: a coverage-guided fuzzer for dynamic language interpreters based on a custom intermediate language (“FuzzIL”) which can be mutated and translated to JavaScript.
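To illustrate the idea (this is a made-up toy format, not Fuzzilli's actual FuzzIL encoding): a program can be represented as a list of intermediate-language operations, mutated at the IL level, and only then lowered to JavaScript source, so every mutant is still syntactically valid.

```python
# Toy sketch of an IL-based fuzzer front end. The op names and the
# lowering rules are invented for illustration only.
import random

def lower_to_js(program):
    """Translate a list of (op, args) tuples into JavaScript source."""
    lines = []
    for i, (op, args) in enumerate(program):
        if op == "load_int":
            lines.append(f"const v{i} = {args[0]};")
        elif op == "add":
            lines.append(f"const v{i} = v{args[0]} + v{args[1]};")
        elif op == "print":
            lines.append(f"print(v{args[0]});")
    return "\n".join(lines)

def mutate(program, rng):
    """One IL-level mutation: perturb an integer constant in place."""
    prog = list(program)
    idx = [i for i, (op, _) in enumerate(prog) if op == "load_int"]
    if idx:
        i = rng.choice(idx)
        prog[i] = ("load_int", (prog[i][1][0] + rng.randint(-8, 8),))
    return prog

seed = [("load_int", (1,)), ("load_int", (2,)),
        ("add", (0, 1)), ("print", (2,))]
print(lower_to_js(mutate(seed, random.Random(0))))
```

Because mutation happens on the IL rather than on JavaScript text, the output never contains syntax errors, which is what lets a fuzzer like this reach deep interpreter and JIT code paths.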

When fuzzing for core interpreter bugs, e.g. in JIT compilers, the semantic correctness of generated programs becomes a concern. This is in contrast to most other scenarios, e.g. fuzzing of runtime APIs, in which semantic errors can easily be masked by wrapping the generated code in try-catch constructs.
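A minimal sketch of the try-catch trick mentioned above (the generated statements are placeholders): each statement of a generated test case is wrapped individually, so a thrown TypeError or RangeError cannot terminate the test case early. This works for API fuzzing, but not for JIT fuzzing, where code that throws immediately never runs long enough to get compiled.

```python
# Illustrative harness helper: wrap each generated JS statement in
# try-catch so semantic errors don't abort the whole test case.
def wrap_for_api_fuzzing(statements):
    """Return JS source with every statement wrapped in try-catch."""
    return "\n".join(f"try {{ {s} }} catch (e) {{}}" for s in statements)

# Hypothetical generated statements, some of which will throw:
generated = ["foo.bar(1)", "new ArrayBuffer(-1)", "x.y = z"]
print(wrap_for_api_fuzzing(generated))
```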


[$] A survey of some free fuzzing tools

Post Syndicated from jake original https://lwn.net/Articles/744269/rss

Many techniques in software security are complicated and require a deep
understanding of the internal workings of the computer and the software under
test. Some techniques, though, are conceptually simple and do not rely on
knowledge of the underlying software. Fuzzing is a useful example: running a
program with a wide variety of junk input and seeing if it does anything
abnormal or interesting, like crashing. Though it might seem unsophisticated,
fuzzing is extremely helpful in finding the parsing and input processing
problems that are often the beginning of a security vulnerability.
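The conceptual simplicity described above fits in a few lines. Here is a hedged sketch, with Python's json parser standing in for an arbitrary program under test: feed it random bytes and record any input that triggers something other than a clean rejection.

```python
# Minimal dumb fuzzer: junk input in, watch for anything abnormal.
import json
import random

def fuzz_once(rng, max_len=64):
    """Run the target on one random input; return it if 'interesting'."""
    data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
    try:
        json.loads(data)
    except (ValueError, UnicodeDecodeError):
        return None          # clean rejection of junk input: expected
    except Exception as exc:
        return (data, exc)   # anything else is "interesting"
    return None

rng = random.Random(1)
crashes = [r for r in (fuzz_once(rng) for _ in range(1000)) if r]
print(len(crashes))
```

Real fuzzers add the two ingredients this sketch lacks: coverage feedback to guide mutation, and crash detection at the process level (signals, sanitizers) rather than language-level exceptions.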

[$] Continuous-integration testing for Intel graphics

Post Syndicated from jake original https://lwn.net/Articles/735468/rss

Two separate talks, at two different venues, give us a look into the
kinds of testing that the Intel graphics team is
doing. Daniel Vetter had a
short presentation as part of the Testing and Fuzzing microconference at
the Linux Plumbers Conference (LPC). His colleague, Martin Peres, gave a
somewhat longer talk, complete with demos, at the X.Org Developers Conference
(XDC). The picture they paint is a pleasing one: there is lots of testing
going on there. But there are problems as well; that amount of testing
runs afoul of bugs elsewhere in the kernel, which makes the job

[$] More from the testing and fuzzing microconference

Post Syndicated from jake original https://lwn.net/Articles/735034/rss

A lot was discussed and presented in the three hours allotted to the Testing
and Fuzzing microconference
at this year’s Linux Plumbers Conference
(LPC), but some spilled out of that slot. We have already looked at some
discussions on kernel testing that occurred both before and during the
microconference. Much of the rest of the discussion is summarized in the
article from this week’s edition, which subscribers can access from the
link below.

[$] Testing kernels

Post Syndicated from jake original https://lwn.net/Articles/734016/rss

New kernels are released regularly, but it is not entirely
clear how much in-depth testing they are actually getting. Even the
mainline kernel may not be getting enough of the right kind of testing. That was the
topic for a “birds of a feather” (BoF) meeting at this year’s Linux Plumbers
Conference (LPC), held in mid-September in Los Angeles, CA.
Dhaval Giani and Sasha Levin organized the BoF as a prelude to the Testing
and Fuzzing microconference
they were leading the next day.

Vranken: The OpenVPN post-audit bug bonanza

Post Syndicated from corbet original https://lwn.net/Articles/726157/rss

Guido Vranken describes
his efforts
to fuzz-test OpenVPN and the bug reports that resulted.
“Most of these issues were found through fuzzing. I hate admitting it,
but my chops in the arcane art of reviewing code manually, acquired through
grueling practice, are dwarfed by the fuzzer in one fell swoop; the
mortal’s mind can only retain and comprehend so much information at a time,
and for programs that perform long cycles of complex, deeply nested
operations it is simply not feasible to expect a human to perform an
encompassing and reliable verification.”

LibreOffice leverages Google’s OSS-Fuzz to improve quality of office suite

Post Syndicated from ris original https://lwn.net/Articles/723566/rss

The Document Foundation looks at the progress made in improving the quality
and reliability of LibreOffice’s source code by using Google’s OSS-Fuzz.
“Developers have used the continuous and
automated fuzzing process, which often catches issues just hours after they
appear in the upstream code repository, to solve bugs – and potential
security issues – before the next binary release.

LibreOffice is the first free office suite in the marketplace to leverage
Google’s OSS-Fuzz. The service, which is associated with other source code
scanning tools such as Coverity, has been integrated into LibreOffice’s
security processes – under Red Hat’s leadership – to significantly improve
the quality of the source code.”

AFL experiments, or please eat your brötli

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/04/afl-experiments-or-please-eat-your.html

When messing around with AFL, you sometimes stumble upon something unexpected or amusing. Say,
having the fuzzer spontaneously synthesize JPEG files,
come up with non-trivial XML syntax,
or discover SQL semantics.

It is also fun to challenge yourself to employ fuzzers in non-conventional ways. Two canonical examples are having your fuzzing target call abort() whenever two libraries that are supposed to implement the same algorithm produce different outputs when given identical input data; or when a library produces different outputs when asked to encode or decode the same data several times in a row.
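Both oracles described above can be sketched in a few lines, with zlib standing in for the libraries under test (the abort() of a C harness becomes an assert here):

```python
# Two fuzzing oracles: output stability and round-trip correctness.
import random
import zlib

def stability_oracle(data):
    """Encoding the same input twice must produce identical output."""
    a = zlib.compress(data, 6)
    b = zlib.compress(data, 6)
    assert a == b, "non-deterministic encoder output"

def roundtrip_oracle(data):
    """decode(encode(x)) must reproduce x exactly."""
    assert zlib.decompress(zlib.compress(data)) == data

rng = random.Random(2)
for _ in range(200):
    buf = bytes(rng.randrange(256) for _ in range(rng.randrange(512)))
    stability_oracle(buf)
    roundtrip_oracle(buf)
print("ok")
```

A differential harness for two independent implementations of the same algorithm looks just like `stability_oracle`, except `a` and `b` come from different libraries; any assertion failure is a finding for at least one of them.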

Such tricks may sound fanciful, but they actually find interesting bugs. In one case, AFL-based equivalence fuzzing revealed a
bunch of fairly rudimentary flaws in common bignum libraries,
with some theoretical implications for crypto apps. Another time, output stability checks revealed long-lived issues in
IJG jpeg and other widely-used image processing libraries, leaking
data across web origins.

In one of my recent experiments, I decided to fuzz
brotli, an innovative compression library used in Chrome. But since it’s been
already fuzzed for many CPU-years, I wanted to do it with a twist:
stress-test the compression routines, rather than the usually targeted decompression side. The latter is a far more fruitful
target for security research, because decompression normally involves parsing untrusted and potentially malformed inputs, whereas compression code is meant to
accept arbitrary data and not think about it too hard. That said, the low likelihood of flaws also means that the compression bits are a relatively unexplored surface that may be worth
poking with a stick every now and then.
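Fuzzing a compressor calls for a different oracle than crash detection, since it accepts any input by design. One option, sketched below with zlib standing in for brotli, is timing: mutate inputs and keep those that are unusually slow to compress, hunting for the kind of computationally intensive plaintexts mentioned below.

```python
# Hedged sketch: fuzz the compression side for algorithmic slowdowns.
# The "coverage" signal here is wall-clock time, not a crash.
import random
import time
import zlib

def compress_time(data):
    """Measure how long the compressor chews on one input."""
    t0 = time.perf_counter()
    zlib.compress(data, 9)
    return time.perf_counter() - t0

def mutate(data, rng):
    """Flip one byte; occasionally grow the input by duplication."""
    data = bytearray(data or b"0")
    i = rng.randrange(len(data))
    data[i] = rng.randrange(256)
    if rng.random() < 0.3:
        data += data[i:i + 16]
    return bytes(data)

rng = random.Random(3)
corpus, worst = [b"0"], 0.0
for _ in range(300):
    cand = mutate(rng.choice(corpus), rng)
    t = compress_time(cand)
    if t > worst:               # new slowest input so far: keep it
        worst, corpus = t, cand and corpus + [cand]
print(len(corpus) > 1)
```

AFL's persistent timing instrumentation does this far more robustly; the point here is only the shape of the oracle.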

In this case, the library held up admirably – save for a handful of computationally intensive plaintext inputs
(that are now easy to spot thanks to recent improvements to AFL).
But the output corpus synthesized by AFL, after being seeded with a single file containing just “0”, featured quite a few peculiar finds:

  • Strings that looked like viable bits of HTML or XML:

  • Non-trivial numerical constants:
    0,000 0,000 0,0000 0x600,
    0000,$000: 0000,$000:00000000000000.

  • Nonsensical but undeniably English sentences:
    them with them m with them with themselves,
    in the fix the in the pin th in the tin,
    amassize the the in the in the [email protected] in,
    he the themes where there the where there,
    size at size at the tie.

  • Bogus but semi-legible URLs:
    CcCdc.com/.com/m/ /00.com/.com/m/ /00(0(000000CcCdc.com/.com/.com

  • Snippets of Lisp code:

The results are quite unexpected, given that they are just a product of randomly mutating a single-byte input file and observing the code coverage in a simple compression tool. The explanation is that brotli, in addition to more familiar binary coding methods, uses a static dictionary constructed by analyzing common types of web content. Somehow, by observing the behavior of the program, AFL was able to incrementally reconstruct quite a few of these hardcoded keywords – and then put them together in various semi-interesting ways. Not bad.
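The incremental-reconstruction mechanism can be demonstrated with a toy (this is not AFL itself, and the keyword is a made-up stand-in for a brotli dictionary entry): if the coverage signal reveals how far a comparison against a hardcoded string gets, random mutation plus keep-if-coverage-improves recovers the string byte by byte.

```python
# Toy coverage-guided loop: recover a hardcoded keyword byte by byte.
import random

SECRET = b"window"  # stands in for a static dictionary entry

def coverage(data):
    """Number of leading bytes matching the hardcoded keyword."""
    n = 0
    for a, b in zip(data, SECRET):
        if a != b:
            break
        n += 1
    return n

def mutate(data, rng):
    """Either append a random byte or overwrite a random position."""
    data = bytearray(data)
    if rng.random() < 0.5 or not data:
        data.append(rng.randrange(256))
    else:
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

rng = random.Random(4)
best, best_cov = b"0", coverage(b"0")
for _ in range(200_000):
    cand = mutate(best, rng)
    c = coverage(cand)
    if c > best_cov:            # keep any input that gets further
        best, best_cov = cand, c
    if best_cov == len(SECRET):
        break
print(best[:len(SECRET)])
```

Each correct byte takes on the order of a few hundred random tries, so the whole keyword falls out of a few thousand iterations – the same effect, writ small, as AFL rediscovering brotli's web-content dictionary from a one-byte seed.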