Socorro and Firefox 57

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if they'd like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro--specifically the Socorro collector.

Teams at Mozilla are feverishly working on Firefox 57. That's super important work and we're getting down to the wire. Socorro is a critical part of that development work as it collects incoming crashes, processes them, and has tools for analysis.

This blog post covers some of the things Socorro engineering has been doing to facilitate that work and what we're planning from now until Firefox 57 release.

This quarter

This quarter, we replaced Snappy with Tecken for more reliable symbol lookup in Visual Studio and other clients.

We built a Docker-based local dev environment for Socorro, making it easier to run Socorro on your local machine configured like crash-stats.mozilla.com. It now takes five steps to get Socorro running on your computer.

We also overhauled the signature generation system in Socorro and slapped on a command-line interface. Now you can test the effects of signature generation changes on specific crashes as well as groups of crashes on your local machine.

We've also been fixing stability issues and bugs and myriad other things.

Now until Firefox 57

Starting today and continuing until after Firefox 57 release, we are:

  1. prioritizing your signature generation changes, getting them landed, and pushing them to -prod
  2. triaging Socorro bugs into "need it right now" and "everything else" buckets
  3. deferring big changes to Socorro until after Firefox 57, including API endpoint deprecation, major UI changes to the crash-stats interface, and other things that would affect your workflow

We want to make sure crash analysis is working as well as it can, so you can do your best work and we can have a successful Firefox 57.

Please contact us if you need something!

We hang out in #breakpad on irc.mozilla.org. You can also file bugs.

Hopefully this helps. If not, let us know!

Soloists: code review on a solo project

Summary

I work on some projects with other people, but I also spend a lot of time working on projects by myself. When I'm working by myself, I have difficulties with the following:

  1. code review
  2. bouncing ideas off of people
  3. pair programming
  4. long slogs
  5. getting help when I'm stuck
  6. publicizing my work
  7. dealing with loneliness
  8. going on vacation

I started a #soloists group at Mozilla, figuring there are a bunch of other Mozillians working on solo projects, and maybe if we all work alone together, that might alleviate some of the problems of working solo. We hang out in the #soloists IRC channel on irc.mozilla.org. If you're solo, join us!

I keep thinking about writing a set of blog posts covering things we've talked about in the channel and how I do things. Maybe they'll help you.

This one covers code review.

Antenna: post-mortem and project wrap-up

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if they'd like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro--specifically the Socorro collector.

The Socorro collector is one of several components that comprise Socorro. Each of the components has different uptime requirements and a different security risk profile. However, all the code is maintained in a single repository, and we deploy every component every time we do a deploy. This is increasingly inflexible and makes it difficult for us to make architectural changes to Socorro without affecting everything and incurring uptime risk for components that have high uptime requirements.

Because of that, in early 2016, we embarked on a rearchitecture to split out some components of Socorro into separate services. The first component to get split out was the Socorro collector, since it has the highest uptime requirements of all the Socorro components but rarely changes, so it'd be a lot easier to meet those requirements if it were separate from the rest of Socorro.

Thus I was tasked with splitting out the Socorro collector, and this blog post covers that project. It's a bit stream-of-consciousness, because I think there's some merit to explaining the thought process behind how I did the work over the course of the project for other people working on similar projects.

The Soloists

Building Firefox is a big endeavor. There are many teams and projects covering initiatives, maintenance, bug fixing, triage, localization, support, understanding feedback, marketing, communication, releasing, supporting infrastructure, crash analysis, and a bazillion other activities all to build a family of browsers and applications.

Teams and projects aren't static. People move around as priorities change and the landscape shifts and projects complete or are scuttled.

Sometimes projects get started up with a single person. Sometimes all the people except one move off a project. Sometimes we find ourselves working alone, in a basement office, with only a stapler equivalent to keep us company.

We are the soloists. You wouldn't believe the list of things we work on. Alone.

Where to find soloists: IRC, Slack

There's an IRC channel #soloists on irc.mozilla.org.

There's also a Slack channel #soloists on the Mozilla Slack [1].

These two places (and whatever other places soloists want to hang out at) are places where we can:

  • find some solace from the weary drudgery of being alone on our projects for days on end
  • ask for help
  • bounce ideas off each other
  • vent frustrations in a friendly forgiving place
  • get advice on dealing with things like code reviews and how to go on vacation
  • get recognition for a job well done

and a variety of other things that alleviate many of the problems we have as soloists.

[1] I just created it, so it's kind of empty. I'm feeling alone in the #soloists Slack channel. So alone.

Stickers at the All Hands!

Over the last month or so, we spent some time figuring out #soloists stickers because we like stickers and you like stickers and everyone likes stickers.

They look like this:

[Image: Soloist 2017 sticker]

They're 2" by 2" and round. They're warm to the touch. They make you want to climb things. By yourself. Alone. With appropriate safety gear. [2]

If you're a soloist, come find one of us and get a sticker. Also, consider joining soloist channels.

If you support soloists, come find one of us and get a sticker. Ask us about the things we're working on. We may be solo, but we're working on real projects that almost certainly affect you. As a group, we did great things in the last 6 months. Alone. So alone.

[2] That's how they make me feel, anyhow.

Using Localstack for a fake AWS S3 for local development

Summary

Over the last year, I rewrote the Socorro collector, which is the edge of the Mozilla crash ingestion pipeline. Antenna (the new collector) receives crashes from Breakpad clients as HTTP POSTs, converts the POST payload into JSON, and then saves a bunch of stuff to AWS S3. One of the problems with this is that developing on my laptop is a pain in the ass when it requires being connected to the Internet and talking to a real AWS S3.

This post covers the various things I did to have a locally running fake AWS S3 service.
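
The post goes into detail, but the shape of the solution is: run a fake S3 locally and point your S3 client at it. Here's a minimal sketch, assuming Localstack's default S3 endpoint of that era (http://localhost:4572) and a made-up bucket name:

import boto3
from botocore.client import Config

# Point boto3 at the locally running fake S3 instead of AWS. The
# endpoint URL is an assumption based on Localstack's defaults;
# adjust it to wherever your fake S3 is listening.
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:4572',
    # Localstack doesn't validate credentials, but botocore requires
    # that they be set to something.
    aws_access_key_id='fake',
    aws_secret_access_key='fake',
    config=Config(s3={'addressing_style': 'path'}),
)

s3.create_bucket(Bucket='fake-crashstats')
s3.put_object(Bucket='fake-crashstats', Key='test/key', Body=b'hello')
print(s3.list_objects(Bucket='fake-crashstats'))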

Everett v0.9 released and why you should use Everett

What is it?

Everett is a Python configuration library.

Configuration with Everett:

  • is composable and flexible
  • makes it easier to provide helpful error messages for users trying to configure your software
  • supports auto-documentation of configuration with a Sphinx autocomponent directive
  • supports easy testing with configuration override
  • can pull configuration from a variety of specified sources (environment, ini files, dict, write-your-own)
  • supports parsing values (bool, int, lists of things, classes, write-your-own)
  • supports key namespaces
  • supports component architectures
  • works with whatever you're writing--command line tools, web sites, system daemons, etc

Everett is inspired by python-decouple and configman.
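
To give you a feel for it, here's a minimal sketch of configuration with Everett; the key, default, and doc strings here are illustrative, not from a real project:

from everett.manager import ConfigManager, ConfigOSEnv

# Pull configuration from the process environment. You can list
# multiple sources and Everett consults them in order.
config = ConfigManager(
    environments=[ConfigOSEnv()],
    doc='For configuration help, see https://example.com/configuration.html'
)

# Values are parsed and documented at the point of use. If STATSD_PORT
# is set to something int() can't parse, Everett raises an error that
# includes the doc string below.
statsd_port = config('statsd_port', default='8125', parser=int,
                     doc='Port for the statsd server.')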

v0.9 released!

This release focused on overhauling the Sphinx extension. It now:

  • has an Everett domain
  • supports roles
  • indexes Everett components and options
  • looks a lot better

This was the last big thing I wanted to do before doing a 1.0 release. I consider Everett 0.9 to be a solid beta. Next release will be a 1.0.

Why you should take a look at Everett

At Mozilla, I'm using Everett 0.9 for Antenna which is running in our -stage environment and will go to -prod very soon. Antenna is the edge of the crash ingestion pipeline for Mozilla Firefox.

When writing Antenna, I started out with python-decouple, but I didn't like the way python-decouple dealt with configuration errors (it's pretty hands-off), and I really wanted to automatically generate documentation from my configuration code. Why write the same stuff twice, especially when it's a critical part of setting Antenna up and the part everyone will trip over first?

Here's the configuration documentation for Antenna:

http://antenna.readthedocs.io/en/latest/configuration.html#application

Here's the index which makes it easy to find things by component or by option (in this case, environment variables):

http://antenna.readthedocs.io/en/latest/genindex.html

When you configure Antenna incorrectly, it spits out an error message like this:

1  <traceback omitted, but it'd be here>
2  everett.InvalidValueError: ValueError: invalid literal for int() with base 10: 'foo'
3  namespace=None key=statsd_port requires a value parseable by int
4  Port for the statsd server
5  For configuration help, see https://antenna.readthedocs.io/en/latest/configuration.html

So what's here?

  • Block 1 is the traceback, so you can trace the code if you need to.
  • Line 2 is the exception type and message.
  • Line 3 tells you the namespace, key, and parser used.
  • Line 4 is the documentation for that specific configuration option.
  • Line 5 is the "see also" documentation for the component with that configuration option.

Is it beautiful? No. [1] But it gives you enough information to know what the problem is and where to go for more information.

Further, in Python 3, Everett will always raise a subclass of ConfigurationError so if you don't like the output, you can tailor it to your project's needs. [2]

First-class docs. First-class configuration error help. First-class testing. This is why I created Everett.

If this sounds useful to you, take it for a spin. It's almost a drop-in replacement for python-decouple [3] and the os.environ.get('CONFIGVAR', 'default_value') style of configuration.

Enjoy!

[1] I would love some help here--making that information easier to parse would be great for a 1.0 release.
[2] Python 2 doesn't support exception chaining and I didn't want to stomp on the original exception thrown, so in Python 2, Everett doesn't wrap exceptions.
[3] python-decouple is a great project and does a good job at what it was built to do. I don't mean to demean it in any way. I have additional requirements that python-decouple doesn't do well and that's where I'm coming from.

Where to go for more

For more specifics on this release, see here: http://everett.readthedocs.io/en/latest/history.html#april-7th-2017

Documentation and quickstart here: https://everett.readthedocs.org/en/v0.9/

Source code and issue tracker here: https://github.com/willkg/everett

Bleach v2.0 released!

What is it?

Bleach is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML.

Bleach v2.0 released!

Bleach 2.0 is a massive rewrite. Bleach relies on the html5lib library. html5lib 0.99999999 (8 9s) changed the APIs that Bleach was using to sanitize text, so in order to support html5lib >= 0.99999999, I needed to rewrite Bleach.

Before embarking on the rewrite, I improved the tests and added a set of tests based on XSS example strings from the OWASP site. Spending quality time with tests before a rewrite or refactor is both illuminating (you get a better understanding of what the requirements are) and also immensely helpful (you know when your rewrite/refactor differs from the original). That was time well spent.

Given that I was doing a rewrite anyway, I decided to take this opportunity to break the Bleach API to make it more flexible and easier to use (there's a sketch of the new classes after this list):

  • added Cleaner and Linker classes that you can create once and reuse to reduce redundant work--suggested in #125
  • created BleachSanitizerFilter which is now an html5lib filter that can be used anywhere you can use an html5lib filter
  • created LinkifyFilter as an html5lib filter that can be used anywhere you use an html5lib filter including as part of cleaning allowing you to clean and linkify in one pass--suggested in #46
  • changed arguments for attribute callables and linkify callbacks
  • and so on
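
Here's a rough sketch of the Cleaner and Linker classes in use (LinkifyFilter shows up in the upgrade section below); the tags and attributes here are illustrative:

from bleach.sanitizer import Cleaner
from bleach.linkifier import Linker

# Create these once and reuse them--this avoids rebuilding the
# underlying html5lib machinery on every call.
cleaner = Cleaner(tags=['b', 'i', 'a'], attributes={'a': ['href']})
linker = Linker()

for text in ['<b>bold</b> text', 'see http://example.com']:
    print(linker.linkify(cleaner.clean(text)))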

During and after the rewrite, I improved the documentation, converting all the examples to doctest format so they're testable and verifiable, and adding examples where there weren't any. This uncovered bugs in the documentation and pointed out some annoyances with the new API.

As I rewrote and refactored code, I focused on making the code simpler and easier to maintain going forward and also documented the intentions so I and others can know what the code should be doing.

I also adjusted the internals to make it easier for users to extend, subclass, swap out, and otherwise adjust the functionality to meet their needs without making Bleach harder for me to maintain or less safe because of additional complexity.

For API-adjustment inspiration, I went through the Bleach issue tracker and tried to address every possible issue with this update: infinite loops, unintended behavior, inflexible APIs, suggested refactorings, features, bugs, etc.

The rewrite took a while. I tried to be meticulous because this is a security library, it's a complicated problem domain, and I was working on my own during slow times on work projects. When working on your own, you don't have the benefit of review. Making sure to have good test coverage and waiting a day to self-review after posting a PR caught a lot of issues. I also went through the PR and added comments explaining why I did things to give context to future me. Those habits help a lot, but probably aren't as good as a code review by someone else.

Some stats

OMG! This blog post is so boring! Walls of text everywhere so far!

There were 61 commits between v1.5 and v2.0:

  • Vadim Kotov: 1
  • Alexandr N. Zamaraev: 2
  • me: 58

I closed out 22 issues--possibly some more.

The rewrite has the following git diff --shortstat:

64 files changed, 2330 insertions(+), 1128 deletions(-)

Lines of code for Bleach 1.5:

~/mozilla/bleach> cloc bleach/ tests/
      11 text files.
      11 unique files.
       0 files ignored.

http://cloc.sourceforge.net v 1.60  T=0.07 s (152.4 files/s, 25287.2 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          11            353            200           1272
-------------------------------------------------------------------------------
SUM:                            11            353            200           1272
-------------------------------------------------------------------------------
~/mozilla/bleach>

Lines of code for Bleach 2.0:

~/mozilla/bleach> cloc bleach/ tests/
      49 text files.
      49 unique files.
      36 files ignored.

http://cloc.sourceforge.net v 1.60  T=0.13 s (101.7 files/s, 20128.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          13            545            406           1621
-------------------------------------------------------------------------------
SUM:                            13            545            406           1621
-------------------------------------------------------------------------------
~/mozilla/bleach>

Some off-the-cuff performance benchmarks

I ran some timings between Bleach 1.5 and various uses of Bleach 2.0 on the Standup corpus.

Here are the results:

what?                                      time to clean and linkify
-----------------------------------------  ------------------------
Bleach 1.5                                 1m33s
Bleach 2.0 (no code changes)               41s
Bleach 2.0 (using Cleaner and Linker)      10s
Bleach 2.0 (clean and linkify--one pass)   7s

How'd I compute the timings?

  1. I'm using the Standup corpus, which has 42000 status messages in it. Each status message is like a tweet--it's short, has some links, possibly has HTML in it, etc.
  2. I wrote a timing harness that goes through all those status messages, times how long it takes to clean and linkify each status message's content, accumulates those timings, and then returns the total time spent cleaning and linkifying.
  3. I ran that 10 times and took the median. The timing numbers were remarkably stable, and there were only a few seconds' difference between the high and low for all of the sets.
  4. I wrote the median number down in the table above.
  5. Then I'd adjust the code as specified in the table and run the timings again.
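
For flavor, here's a hypothetical reconstruction of that harness--this is not the actual script, and the corpus loading is elided:

import time

import bleach

def time_corpus(messages):
    # Accumulate only the time spent cleaning and linkifying--not
    # the time spent iterating over the corpus.
    total = 0.0
    for text in messages:
        start = time.time()
        bleach.linkify(bleach.clean(text))
        total += time.time() - start
    return total

# messages = load_standup_corpus()  # elided
# runs = sorted(time_corpus(messages) for _ in range(10))
# print('median:', runs[len(runs) // 2])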

I have several observations/thoughts:

First, holy moly--1m33s to 7s is a HUGE performance improvement.

Second, just switching from Bleach 1.5 to 2.0 and making no code changes (in other words, keeping your calls as bleach.clean and bleach.linkify rather than using Cleaner and Linker and LinkifyFilter) gets you a lot. Depending on whether you have attribute filter callables and linkify callbacks, you may be able to just upgrade the libs and WIN!

Third, switching to reusing Cleaner and Linker also gets you a lot.

Fourth, your mileage may vary depending on the nature of your corpus. For example, Standup status messages are short so if your text fragments are larger, you may see more savings by clean-and-linkify in one pass because HTML parsing takes more time.

How to upgrade

Upgrading should be straightforward.

Here's the minimal upgrade path:

  1. Update Bleach to 2.0 and html5lib to >= 0.99999999 (8 9s).
  2. If you're using attribute callables, you'll need to update them (see the sketch after this list).
  3. If you're using linkify callbacks, you'll need to update them (also sketched below).
  4. Read through version 2.0 changes for any other backwards-incompatible changes that might affect you.
  5. Run your tests and see how it goes.
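
For steps 2 and 3, the shapes changed roughly like this. This is a sketch with made-up callable names; the Bleach 2.0 docs have the exact details:

# Attribute callables now take (tag, name, value) instead of
# (name, value) and return True to keep the attribute.
def allow_href(tag, name, value):
    return tag == 'a' and name == 'href'

# Linkify callbacks now get attrs dicts keyed by (namespace, name)
# tuples instead of bare attribute names.
def set_nofollow(attrs, new=False):
    attrs[(None, 'rel')] = 'nofollow'
    return attrs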

Note

If you're using html5lib 1.0b8, then you have to explicitly upgrade the version. 1.0b8 is equivalent to html5lib 0.9999999 (7 9s) and that's not supported by Bleach 2.0.

You have to upgrade explicitly because pip thinks that 1.0b8 comes after 0.99999999 (8 9s), even though feature-wise it doesn't. So pip won't upgrade html5lib for you.

If you're doing 9s, make sure to upgrade to 0.99999999 (8 9s) or higher.

If you're doing 1.0bs, make sure to upgrade to 1.0b9 or higher.
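
In other words, something like this (assuming pip; use whichever html5lib specifier matches the series you're on):

pip install --upgrade 'bleach>=2.0' 'html5lib>=0.99999999'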

If you want better performance:

  1. Switch to reusing bleach.sanitizer.Cleaner and bleach.linkifier.Linker.

If you have large text fragments:

  1. Switch to reusing bleach.sanitizer.Cleaner and set filters to include LinkifyFilter, which lets you clean and linkify in one step (sketched below).
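
A sketch of that pattern, again with illustrative tags:

from bleach.sanitizer import Cleaner
from bleach.linkifier import LinkifyFilter

# Running LinkifyFilter as part of cleaning means html5lib only has
# to parse the text once. Use functools.partial to pass arguments
# (like callbacks) to the filter.
cleaner = Cleaner(
    tags=['b', 'i', 'a'],
    attributes={'a': ['href']},
    filters=[LinkifyFilter],
)

print(cleaner.clean('see http://example.com'))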

Many thanks

Many thanks to James Socol (previous maintainer) for walking me through why things were the way they were.

Many thanks to Geoffrey Sneddon (html5lib maintainer) for answering questions, helping with problems I encountered, and all his efforts on html5lib--a huge library that he works on in his spare time and for which he doesn't get anywhere near enough gratitude.

Many thanks to Lonnen (my manager) who heard me talk about html5lib zero point nine nine nine nine nine nine nine nine a bunch.

Also, many thanks to Mozilla for letting me work on this during slow periods of the projects I should be working on. A bunch of Mozilla sites use Bleach, but none of mine do.

Where to go for more

For more specifics on this release, see here: https://bleach.readthedocs.io/en/latest/changes.html#version-2-0-march-8th-2017

Documentation and quickstart here: https://bleach.readthedocs.org/en/v2.0/

Source code and issue tracker here: https://github.com/mozilla/bleach

Who uses my stuff?

Summary

I work on a lot of different things. Some are applications, some are libraries, some I started, some other people started, etc. I have way more stuff to do than I could possibly get done, so I try to spend my time on things "that matter".

For Open Source software that doesn't have an established community, this is difficult.

This post is a wandering stream of consciousness covering my journey figuring out who uses Bleach.

Everett v0.8 released!

What is it?

Everett is a Python configuration library.

Configuration with Everett:

  • is composable and flexible
  • makes it easier to provide helpful error messages for users trying to configure your software
  • can pull configuration from a variety of specified sources (environment, ini files, dict, write-your-own)
  • supports parsing values (bool, int, lists of things, classes, write-your-own)
  • supports key namespaces
  • facilitates writing tests that change configuration values
  • supports component architectures with auto-documentation of configuration with a Sphinx autoconfig directive
  • works with whatever you're writing--command line tools, web sites, system daemons, etc

Everett is inspired by python-decouple and configman.

v0.8 released!

As I sat down to write this, I discovered I'd never written about Everett before. I wrote it initially as part of another project and then extracted it and did a first release in August 2016.

Since then, I've been tinkering with how it works in relation to how it's used and talking with peers to understand their thoughts on configuration.

At this stage, I like Everett and it's at a point where it's worth telling others about and probably due for a 1.0 release.

This is v0.8. In this release, I spent some time polishing the autoconfig Sphinx directive to make it more flexible to use in your project documentation. Instead of having configuration bits documented all over your project, you centralize them in one place, and then in your Sphinx docs you have something like:

.. autoconfig:: path.to.AppConfig

and it happily spits out the relevant configuration documentation. For example, here's Antenna's configuration documentation.
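
For a sense of what the centralized configuration looks like, here's a sketch using Everett's component API; the class and options are made up for illustration:

from everett.component import ConfigOptions, RequiredConfigMixin

class AppConfig(RequiredConfigMixin):
    required_config = ConfigOptions()
    required_config.add_option(
        'basedir',
        default='/app',
        doc='Root directory for the app.'
    )
    required_config.add_option(
        'statsd_port',
        default='8125',
        parser=int,
        doc='Port for the statsd server.'
    )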

It'd be nice if configuration variables showed up in the index. I'll mull that over later.

Where to go for more

For more specifics on this release, see here: https://everett.readthedocs.io/en/latest/history.html#january-24th-2017

Documentation and quickstart here: https://everett.readthedocs.org/en/v0.8/

Source code and issue tracker here: https://github.com/willkg/everett